Search results for: the finite element method FEM
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20732

13682 Simulation of Turbulent Flow in Channel Using Generalized Hydrodynamic Equations

Authors: Alex Fedoseyev

Abstract:

This study explores the Generalized Hydrodynamic Equations (GHE) for the simulation of turbulent flows. The GHE was derived from the Generalized Boltzmann Equation (GBE) by Alexeev (1994); the GBE was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions. Compared with the Navier-Stokes equations (NSE), the GHE contains new terms describing temporal and spatial fluctuations. These new terms carry a timescale multiplier τ, and the GHE becomes the NSE when τ is zero. The nondimensional τ is a product of the Reynolds number and the squared length scale ratio, τ=Re*(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. The turbulence phenomenon is not well understood and is not described by the NSE; an additional one or two equations are required for a turbulence model, which may have to be tuned for specific problems. We show that, in the case of the GHE, no additional turbulence model is needed, and the turbulent velocity profile is obtained from the GHE. The 2D turbulent channel and circular pipe flows were investigated using a numerical solution of the GHE for several cases. The solutions are compared with the experimental data for circular pipes and 2D channels by Nikuradse (1932, Prandtl Lab), Hussain and Reynolds (1975), Wei and Willmarth (1989), and Van Doorne (2007), with the theory of Wosnik, Castillo and George (2000), and with the relevant experiments on the Superpipe setup at Princeton, data by Zagarola (1996) and Zagarola and Smits (1998); the Reynolds number ranges from Re=7200 to Re=960000. The numerical solution data compared well with the experimental data, as well as with the approximate analytical solution for turbulent flow in a channel by Fedoseyev (2023). The obtained results confirm that the Alexeev generalized hydrodynamic theory (GHE) is in good agreement with the experiments for turbulent flows. The proposed approach is limited to 2D and 3D axisymmetric channel geometries. Further work will extend this approach to channels with square and rectangular cross-sections.
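
To illustrate the scale of the timescale multiplier defined in the abstract, the minimal sketch below evaluates τ = Re·(l/L)² over the cited Reynolds-number range and its Navier-Stokes limit. The length scales used are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the nondimensional timescale multiplier tau = Re * (l/L)^2.
# The Kolmogorov and hydrodynamic length scales below are assumed for illustration.

def tau(reynolds: float, l_kolmogorov: float, l_hydro: float) -> float:
    """Timescale multiplier of the GHE; the GHE reduces to the NSE as tau -> 0."""
    return reynolds * (l_kolmogorov / l_hydro) ** 2

for re in (7200.0, 960000.0):                 # Reynolds-number range cited in the abstract
    print(re, tau(re, l_kolmogorov=1e-4, l_hydro=1.0))   # assumed l and L

print(tau(7200.0, 0.0, 1.0))                  # tau = 0 recovers the Navier-Stokes limit
```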

Keywords: comparison with experimental data, generalized hydrodynamic equations, numerical solution, turbulent boundary layer, turbulent flow in channel

Procedia PDF Downloads 54
13681 Financial Analysis of Feasibility for a Heat Utilization System Using Rice Straw Pellets: Heating Energy Demand and the Collection and Storage Method in Nanporo, Japan

Authors: K.Ishii, T. Furuichi, A. Fujiyama, S. Hariya

Abstract:

Rice straw pellets are a promising fuel as a renewable energy source. Financial analysis is needed to make a utilization system using rice straw pellets financially feasible, considering all regional conditions, including the stakeholders involved in collection and storage, production, transportation, and heat utilization. We conducted a financial feasibility analysis of a heat utilization system using rice straw pellets that has been developed for the first time in Nanporo, Hokkaido, Japan. In particular, we attempted to clarify the effect of the factors required for the system to be financially feasible, such as the heating energy demand and the collection and storage method of rice straw. The financial feasibility was found to improve when increasing the heating energy demand and when collecting wheat straw in August separately from the collection of rice straw in November, because the costs of storing rice straw and producing pellets were reduced. However, the system remained financially unfeasible. This study therefore proposed a contractor program, funded by a subsidy from the Nanporo local government, in which a contracted company, instead of farmers, collects and transports rice straw in order to ensure the financial feasibility of the system, contributing to job creation in the region.

Keywords: rice straw, pellets, heating energy demand, collection, storage

Procedia PDF Downloads 394
13680 Error Analysis in Academic Writing of EFL Learners: A Case Study for Undergraduate Students at Pathein University

Authors: Aye Pa Pa Myo

Abstract:

Writing in English is regarded as a complex process for learners of English as a foreign language. Moreover, committing errors in writing is an inevitable part of language learning. Generally, academic writing is quite difficult for most students to manage well enough to obtain better scores, and students commit common errors when they attempt academic writing. Error analysis deals with identifying and detecting errors and also explains the reasons for their occurrence. In this paper, the researcher attempts to examine the common errors of undergraduate students in their academic writing at Pathein University. The purpose of this research is to investigate the errors which students usually commit in academic writing and to find better ways of correcting these errors in EFL classrooms. Fifty third-year non-English-specialization students attending Pathein University were selected as participants. The research took one month and was conducted with a mixed methodology; two mini-tests were used as research tools, and data were collected with a quantitative research method. Findings from this research indicate that most of the students noticed their common errors after receiving the necessary input and committed fewer of these errors after taking the mini-tests; hence, the findings will be supportive for further research related to error analysis in academic writing.

Keywords: academic writing, error analysis, EFL learners, mini-tests, mixed methodology

Procedia PDF Downloads 122
13679 Comparison between Pushover Analysis Techniques and Validation of the Simplified Modal Pushover Analysis

Authors: N. F. Hanna, A. M. Haridy

Abstract:

One of the main drawbacks of the Modal Pushover Analysis (MPA) is the need to perform nonlinear time-history analysis, which complicates the method and increases the analysis time. A simplified version of the MPA has been proposed based on the concept of the inelastic deformation ratio. Furthermore, the effect of the higher modes of vibration is considered by assuming linearly elastic responses, which enables the use of standard elastic response spectrum analysis. In this study, the simplified MPA (SMPA) method is applied to determine the target global drift and the inter-story drifts of a steel frame building, with the effect of the higher vibration modes considered within the framework of the SMPA. A comprehensive survey of the inelastic deformation ratio is presented; a suitable expression is then selected from the literature and implemented in the SMPA. The seismic demands estimated using the SMPA, such as the target drift, base shear, and inter-story drifts, are compared with the seismic responses determined by applying the standard MPA. The accuracy of the estimated seismic demands is validated by comparison with the results obtained by nonlinear time-history analysis using real earthquake records.
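
As a rough illustration of the simplified idea outlined above, the sketch below amplifies the first-mode elastic spectral displacement by an inelastic deformation ratio, keeps the higher modes linearly elastic, and combines modal roof displacements by SRSS. The function, modal values, and combination rule are assumptions for illustration only, not the formulation used in the paper.

```python
import math

def roof_drift(sd_elastic, gamma, phi_roof, c_inelastic):
    """Roof displacement contribution of one vibration mode (placeholder form)."""
    return c_inelastic * gamma * phi_roof * sd_elastic

modes = [
    # (elastic spectral displacement [m], participation factor, roof mode shape, C)
    (0.12, 1.35, 1.00, 1.8),   # mode 1: inelastic deformation ratio applied
    (0.03, 0.55, 1.00, 1.0),   # mode 2: assumed linearly elastic (C = 1)
    (0.01, 0.25, 1.00, 1.0),   # mode 3: assumed linearly elastic (C = 1)
]

target = math.sqrt(sum(roof_drift(*m) ** 2 for m in modes))  # SRSS combination
print(f"Estimated target roof displacement: {target:.3f} m")
```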

Keywords: modal analysis, pushover analysis, seismic performance, target displacement

Procedia PDF Downloads 351
13678 Study of Natural Patterns on Digital Image Correlation Using Simulation Method

Authors: Gang Li, Ghulam Mubashar Hassan, Arcady Dyskin, Cara MacNish

Abstract:

Digital image correlation (DIC) is a contactless full-field displacement and strain reconstruction technique commonly used in the field of experimental mechanics. Compared with physical measuring devices such as strain gauges, which only provide very restricted coverage and are expensive to deploy widely, the DIC technique provides full-field coverage with relatively high accuracy using an inexpensive and simple experimental setup. It is very important to study the effect of natural patterns on the DIC technique because the preparation of artificial patterns is a time-consuming and laborious process. The objective of this research is to study the effect of using images with natural patterns on the performance of DIC. A systematic simulation method is used to build the simulated deformed images used in DIC. A parameter used in DIC (the subset size) can affect the processing and accuracy of DIC and can even cause it to fail. Regarding the image parameters (the correlation coefficient), high similarity between two subsets can lead the DIC process to fail and make the results less accurate. Images of good and poor quality for DIC methods are presented and, more importantly, the approach provides a systematic way to evaluate the quality of images with natural patterns before the measurement devices are installed.
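
The correlation coefficient mentioned above can be illustrated with a zero-normalized cross-correlation (ZNCC) between a reference subset and a candidate subset, a standard matching criterion in DIC. The subset size and the synthetic image below are assumptions for illustration, not the study's data.

```python
import numpy as np

def zncc(ref_subset: np.ndarray, cand_subset: np.ndarray) -> float:
    """Zero-normalized cross-correlation between two image subsets.
    Values near 1 indicate a good match; near-identical candidate subsets
    (e.g. in a poor natural pattern) make the correlation peak ambiguous."""
    r = ref_subset - ref_subset.mean()
    d = cand_subset - cand_subset.mean()
    return float(np.sum(r * d) / (np.linalg.norm(r) * np.linalg.norm(d)))

# Illustrative 21x21-pixel subsets drawn from a synthetic speckle image
rng = np.random.default_rng(0)
image = rng.random((100, 100))
reference = image[40:61, 40:61]
candidate = image[41:62, 40:61]      # subset shifted by one pixel
print(zncc(reference, reference))    # 1.0 (perfect match)
print(zncc(reference, candidate))    # lower value for the shifted subset
```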

Keywords: Digital Image Correlation (DIC), deformation simulation, natural pattern, subset size

Procedia PDF Downloads 404
13677 The Impact of Air Pollution on Health and the Environment: The Case of Cement Beni-Saf, Western Algeria

Authors: N. Hachemi, I. Benmehdi, O. Hasnaoui

Abstract:

Air, like water, is an essential element for living beings. Each day, a person breathes about 20 m³ of air, which consists of a mixture of gases whose presence and concentrations correspond to the needs of life. This study focuses on air pollution by the smoke and dust emitted from the chimney of the Beni Saf cement works and its pathological impact on health and the environment. Cement plant dust is harmless at permissible levels for living organisms, but two combined phenomena, namely the release of dust and the aridity of the climate, which severely marks the Beni Saf area, have contributed to adverse effects on human health and to the degradation of the vegetation cover, especially species already weakened by environmental stress. The most visible impact is certainly the deposition of dust on the areas surrounding the cement factory, seriously affecting the aesthetics of the landscape. Health problems are important both inside and outside the factory; among the notable diseases caused by the cement works are deafness, heart disease, asthma, and mental disorders. The dust of the cement works is mainly composed of fine particles of limestone, clay, free lime, and silicates, and it is also laden with gases such as carbon dioxide (CO2). The accumulation of this gas in the atmosphere is directly involved in intensifying the greenhouse effect. Some gases, for example, are directly toxic; they can change the climate, alter precipitation patterns, and become a greater source of stress through drought. The environment also suffers from air pollution indirectly, most notably through acid rain, produced by the combustion of non-metals in air. Acid rain contaminates the soil, weakens flora and fauna, and acidifies lakes. Finally, the pollution problems associated with dust are multiple and specific; they can worsen and change, and they have reached proportions that are quantitatively and qualitatively disturbing and unpredictable.

Keywords: atmospheric pollution, cement, dust, environment

Procedia PDF Downloads 319
13676 Biosensor: An Approach towards Sustainable Environment

Authors: Purnima Dhall, Rita Kumar

Abstract:

Introduction: The River Yamuna flows through the National Capital Territory (NCT) and is the primary source of drinking water for the city; Delhi discharges about 3,684 MLD of sewage into the Yamuna through its 18 drains. Water quality monitoring is an important aspect of water management concerning pollution control, and public concern and legislation nowadays demand better environmental control. Conventional methods for estimating BOD5 have various drawbacks: they are expensive, time-consuming, and require the use of highly trained personnel. Stringent forthcoming regulations on wastewater have necessitated the development of analytical systems that contribute to greater process efficiency. Biosensors offer the possibility of real-time analysis. Methodology: In the present study, a novel rapid method for the determination of biochemical oxygen demand (BOD) has been developed. Using the developed method, the BOD of a sample can be determined within 2 hours, compared to 3-5 days with the standard BOD3-5day assay. Moreover, the test is based on specified consortia instead of undefined seeding material; therefore, it minimizes variability among results. The device is coupled to software which automatically calculates the required dilution, so prior dilution of the sample is not needed before BOD estimation. The developed BOD biosensor makes use of immobilized microorganisms to sense the biochemical oxygen demand of industrial wastewaters of low, moderate, and high biodegradability. The method is quick, robust, online, and less time-consuming. Findings: The results of extensive testing of the developed biosensor on drains demonstrate that the BOD values obtained by the device correlated with conventional BOD values; the observed R² value was 0.995. The reproducibility of the measurements with the BOD biosensor was within a percentage deviation of ±10%. Advantages of the developed BOD biosensor: • determines water pollution quickly, within 2 hours; • determines the pollution of all types of wastewater; • has a prolonged shelf life of more than 400 days; • enhanced repeatability and reproducibility values; • eliminates the need for COD estimation. Distinctiveness of the technology: • bio-component: can determine the BOD load of all types of wastewater; • immobilization: increased shelf life of more than 400 days, extended stability and viability; • software: reduces manual errors and estimation time. Conclusion: The BOD biosensor can be used to measure the BOD value of real wastewater samples and showed good reproducibility in the results. This technology is useful in deciding treatment strategies well ahead and thus facilitates the discharge of properly treated water to common water bodies. The developed technology has been transferred to M/s Forbes Marshall Pvt Ltd, Pune.

Keywords: biosensor, biochemical oxygen demand, immobilized, monitoring, Yamuna

Procedia PDF Downloads 264
13675 Preference Heterogeneity as a Positive Rather Than Negative Factor towards Acceptable Monitoring Schemes: Co-Management of Artisanal Fishing Communities in Vietnam

Authors: Chi Nguyen Thi Quynh, Steven Schilizzi, Atakelty Hailu, Sayed Iftekhar

Abstract:

Territorial Use Rights for Fisheries (TURFs) have emerged as a promising tool for fisheries conservation and management. However, illegal fishing has undermined the effectiveness of TURFs, profoundly degrading global fish stocks and marine ecosystems. Conservation and management of fisheries, therefore, largely depend on the effectiveness of enforcing fishing regulations, which requires co-enforcement by fishers. However, fishers tend to resist participating in monitoring, as their views on monitoring scheme design have not received adequate attention. Fishers' acceptance of a monitoring scheme is more likely to be achieved if there is a mechanism allowing fishers to engage in the early planning and design stages. This study carried out a choice experiment with 396 fishers in Vietnam to elicit fishers' preferences for monitoring schemes and to estimate the relative importance that fishers place on the key design elements. Preference heterogeneity was investigated using a Scale-Adjusted Latent Class Model that accounts for both preference and scale variance. Welfare changes associated with the proposed monitoring schemes were also examined. It is found that there are five distinct preference classes, suggesting that no one-size-fits-all scheme is well suited to all fishers. Although fishers prefer to be compensated more for their participation, compensation is not a driving element in fishers' choices. Most fishers place higher value on other elements, such as institutional arrangements and monitoring capacity. Fishers' preferences are driven by their socio-demographic and psychological characteristics. Understanding how changes in the levels of design elements affect the participation of fishers could provide policy makers with insights useful for tailoring monitoring scheme designs to the needs of different fisher classes.

Keywords: design of monitoring scheme, enforcement, heterogeneity, illegal fishing, territorial use rights for fisheries

Procedia PDF Downloads 314
13674 Authenticity of Ecuadorian Commercial Honeys

Authors: Elisabetta Schievano, Valentina Zuccato, Claudia Finotello, Patricia Vit

Abstract:

Control of honey fraud is needed in Ecuador to protect beekeepers and consumers, because simple syrups and new syrups with eucalyptus are sold as genuine honeys. The authenticity of Ecuadorian commercial honeys was tested with a vortex emulsion consisting of one volume of a honey:water (1:1) dilution and two volumes of diethyl ether. This method allows a separation of phases in one minute to discriminate genuine honeys, which form three phases, from fake honeys, which form two phases; 34 of the 42 honeys analyzed from five provinces of Ecuador were genuine. This was confirmed with 1H NMR spectra of honey dilutions in deuterated water, with an enhanced amino acid region showing signals for proline, phenylalanine, and tyrosine. Classic quality indicators were also tested with this method (sugars, HMF), along with indicators of fermentation (ethanol, acetic acid) and residues of citric acid used in syrup manufacture. One of the honeys gave a false positive for genuine, being an admixture of genuine honey with added syrup, evident from its high sucrose content. Sensory analysis was the final confirmation used to recognize the honey groups studied here, namely honey produced in combs by Apis mellifera, fake honey, and honey produced in cerumen pots by Geotrigona, Melipona, and Scaptotrigona. This is a valuable contribution to protecting honey consumers and to developing the beekeeping industry in Ecuador.

Keywords: fake, genuine, honey, 1H NMR, Ecuador

Procedia PDF Downloads 377
13673 Application of Enzyme-Mediated Calcite Precipitation for Surface Control of Gold Mining Tailing Waste

Authors: Yogi Priyo Pradana, Heriansyah Putra, Regina Aprilia Zulfikar, Maulana Rafiq Ramadhan, Devyan Meisnnehr, Zalfa Maulida Insani

Abstract:

This paper studied the effects and mechanisms of treating fine-grained tailings with Enzyme-Mediated Calcite Precipitation (EMCP). The grouting solution consists of reagents (CaCl₂ and CO(NH₂)₂) and urease enzyme, which react to produce CaCO₃. In sample preparation, a test tube is used to investigate the precipitation rate of calcite. The grouting solution added is 75 mL per mold sample. The solution was poured into the mold up to 5 mm above the top surface of the tailings to ensure the entire surface is submerged. The sample is left open in the cylinder for up to 3 days for curing. The direct mixing method is used so that the cementation process occurs evenly throughout the sample. The relationship between the results of the UCS test and the calcite precipitation rate indicates that the amount of calcite deposited in the treated tailings likely controls their strength. The samples are analyzed using atomic absorption spectroscopy (AAS) to evaluate metal and metalloid content. The calcium carbonate deposited in the tailings is expected to strengthen the bond between tailing granules, which otherwise slip easily on the banks of the tailings dam. The EMCP method is thus expected to strengthen tailing surfaces for erosion control.
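
The reaction path implied by these reagents is the urease-catalysed hydrolysis of urea followed by carbonate precipitation with the dissolved calcium; the standard textbook form of this sequence (not quoted from the abstract) is:

```latex
\mathrm{CO(NH_2)_2 + 2\,H_2O \;\xrightarrow{\text{urease}}\; 2\,NH_4^{+} + CO_3^{2-}}
\qquad
\mathrm{Ca^{2+} + CO_3^{2-} \;\longrightarrow\; CaCO_3\!\downarrow}
```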

Keywords: tailing, EMCP, UCS, AAS

Procedia PDF Downloads 124
13672 Analysis of the Annual Proficiency Testing Procedure for Intermediate Reference Laboratories Conducted by the National Reference Laboratory from 2013 to 2017

Authors: Reena K., Mamatha H. G., Somshekarayya, P. Kumar

Abstract:

Objectives: The annual proficiency testing of intermediate reference laboratories is conducted by the National Reference Laboratory (NRL) to assess the ability of the laboratories to correctly identify Mycobacterium tuberculosis and to determine its drug susceptibility pattern. The proficiency testing results from 2013 to 2017 were analyzed to determine which laboratories were consistent in reporting quality results and which had difficulty in doing so. Methods: A panel of twenty cultures was sent to each of these laboratories. The laboratories were expected to grow the cultures in their own laboratories, set up drug susceptibility testing by all the methods they were certified for, and report the results within the stipulated time period. The turnaround time for reporting results, the sensitivity, specificity, positive and negative predictive values, and the efficiency of each laboratory in identifying the cultures were analyzed. Results: Most of the laboratories reported their results within the stipulated time period. However, there were enormous delays in reporting results from a few of the laboratories, mainly due to improper functioning of the biosafety level III laboratory. Only 40% of the laboratories achieved 100% efficiency in solid culture using Lowenstein-Jensen medium. This was expected, as solid culture and drug susceptibility testing are not routinely used for diagnosing drug resistance: rapid molecular methods such as the line probe assay and GeneXpert are used to determine drug resistance, while automated liquid culture systems such as the Mycobacterial Growth Indicator Tube are used to determine the prognosis of patients on treatment. It was observed that 90% of the laboratories achieved 100% efficiency in the liquid culture method, and almost all laboratories achieved 100% efficiency in the line probe assay, which is the method of choice for determining drug-resistant tuberculosis. Conclusion: Since the liquid culture and line probe assay technologies are routinely used for the detection of drug-resistant tuberculosis, the laboratories exhibited a higher level of efficiency with them than with solid culture and drug susceptibility testing, which are rarely used. The infrastructure of the laboratories should be maintained properly so that samples can be processed safely and results can be declared on time.
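
For reference, the standard definitions of the performance measures analyzed above, written in terms of true/false positives and negatives, are:

```latex
\text{Sensitivity} = \frac{TP}{TP+FN}, \qquad
\text{Specificity} = \frac{TN}{TN+FP}, \qquad
\text{PPV} = \frac{TP}{TP+FP}, \qquad
\text{NPV} = \frac{TN}{TN+FN}
```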

Keywords: annual proficiency testing, drug susceptibility testing, intermediate reference laboratory, national reference laboratory

Procedia PDF Downloads 171
13671 Action Research of Local Resident Empowerment in Prambanan Cultural Heritage Area in Yogyakarta

Authors: Destha Titi Raharjana

Abstract:

The findings of this research result from three action research studies conducted in three villages, namely Bokoharjo, Sambirejo, and Tirtomartani. These villages are close to Prambanan, a well-known cultural heritage site located in Sleman Regency, Indonesia. The action research was conducted using a participatory method through observation, interviews, and focus group discussions with local residents as the subjects. The research aims to (a) identify the potentials, obstacles, and opportunities in the development process, in order to give local residents more encouragement, involvement, and empowerment in maintaining the cultural heritage area, (b) present participatory empowerment programs adjusted to the needs of local residents and available human resources, and (c) identify potential stakeholders that can support the empowerment programs. Through the action research method, this study presents (a) a mapping of the potentials, difficulties, and opportunities in the development process in each village, and (b) the empowerment program planning needed by local residents as a follow-up to this action research. Moreover, the research also identifies potential stakeholders who are able to follow up the empowerment programs. It is expected that, at the end of the programs, the local residents will be able to maintain Prambanan, one of the cultural heritage sites that needs to be protected, in a more sustainable way.

Keywords: action research, local resident, empowerment, cultural heritage area, Prambanan, Sleman, Indonesia

Procedia PDF Downloads 235
13670 Modeling of Conjugate Heat Transfer including Radiation in a Kerosene/Air Certification Burner

Authors: Lancelot Boulet, Pierre Benard, Ghislain Lartigue, Vincent Moureau, Nicolas Chauvet, Sheddia Didorally

Abstract:

International aeronautical standards demand a fire certification in which engines demonstrate their resistance. This demonstration relies on tests performed with prototype engines in the late stages of development. The hardest tests require placing a standardized kerosene flame in front of the engine casing for a given time with an imposed temperature and heat flux. The purpose of this work is to provide a better characterization of a kerosene/air certification burner in order to minimize the risk of test failure. A first Large-Eddy Simulation (LES) study made it possible to model and simulate this burner, including both adiabatic and Conjugate Heat Transfer (CHT) computations. Carried out on unstructured grids with 40 million tetrahedral cells using the finite-volume YALES2 code, spray combustion, forced convection on the walls, and conduction in the solid parts of the burner were coupled to achieve a detailed description of heat transfer. This highlighted the fact that conduction inside the solid has a real impact on the flame topology and the combustion regime. However, in the absence of radiative heat transfer, unrealistic equipment temperatures were obtained. The aim of the present study is to include radiative heat transfer in order to reproduce the temperatures given by experimental measurements. First, various test cases are conducted to validate the coupling between the different heat solvers. Then, the adiabatic case, the CHT case, and the CHT case including radiative transfer are studied and compared. The LES model is finally applied to investigate the heat transfer in a flame impaction configuration. The aim is to make progress on fire test modeling so as to reach a good confidence level regarding the success of the certification test.

Keywords: conjugate heat transfer, fire resistance test, large-eddy simulation, radiative transfer, turbulent combustion

Procedia PDF Downloads 214
13669 The Effects of Addition of Chloride Ions on the Properties of ZnO Nanostructures Grown by Electrochemical Deposition

Authors: L. Mentar, O. Baka, A. Azizi

Abstract:

Zinc oxide is a wide band gap semiconductor material whose nanostructured forms have potential applications in large-area electronics, sensors, photovoltaic cells, photonics, optical devices, and optoelectronics due to their unique electrical, optical, and surface properties. The feasibility of ZnO for these applications stems from the successful synthesis of diverse ZnO nanostructures, including nanorings, nanobows, nanohelixes, nanosprings, nanobelts, nanotubes, nanopropellers, nanodisks, and nanocombs, by different methods. Among the various synthesis methods, electrochemical deposition represents a simple and inexpensive solution-based route for the synthesis of semiconductor nanostructures. In this study, the electrodeposition method was used to produce zinc oxide (ZnO) nanostructures on fluorine-doped tin oxide (FTO)-coated conducting glass substrates as the TCO from a chloride bath. We present a systematic study of the effects of the chloride anion concentration on the properties of ZnO. The influence of the KCl concentration on the electrodeposition process and on the morphological, structural, and optical properties of the ZnO nanostructures was examined. In this research, the electrochemical deposition of ZnO nanostructures is investigated using conventional electrochemical measurements (cyclic voltammetry and Mott-Schottky), scanning electron microscopy (SEM), and X-ray diffraction (XRD) techniques. The electrodeposition potentials of ZnO were determined using cyclic voltammetry. From the Mott-Schottky measurements, the flat-band potential and the donor density of the ZnO nanostructures are determined. SEM images show that the size and morphology of the nanostructures depend greatly on the KCl concentration. The morphology of the ZnO nanostructures is determined by the combined action of [Zn(NO3)2] and [Cl-]; very neat hexagonal grains are observed for the nanostructures deposited at 0.1 M KCl. XRD studies revealed that all deposited films were polycrystalline with the wurtzite phase. The electrodeposited thin films are found to be preferentially oriented along the (002) plane of the wurtzite structure of ZnO, with the c-axis normal to the substrate surface, for samples at different KCl concentrations. UV-Visible spectra showed significant optical transmission (~80%), which decreased at low Cl- concentrations. The energy band gap values have been estimated to be between 3.52 and 3.80 eV.
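
The flat-band potential and donor density mentioned above are conventionally extracted from the Mott-Schottky relation for an n-type semiconductor/electrolyte junction; the standard form (not quoted from the abstract) is:

```latex
\frac{1}{C^{2}} \;=\; \frac{2}{e\,\varepsilon\,\varepsilon_{0}\,N_{D}\,A^{2}}
\left( E - E_{\mathrm{fb}} - \frac{k_{B}T}{e} \right)
```

where C is the space-charge capacitance and A the electrode area: a plot of 1/C² against the applied potential E gives the donor density N_D from the slope and the flat-band potential E_fb from the intercept.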

Keywords: electrodeposition, ZnO, chloride ions, Mott-Schottky, SEM, XRD

Procedia PDF Downloads 279
13668 Development of Advanced Linear Calibration Technique for Air Flow Sensing by Using CTA-Based Hot Wire Anemometry

Authors: Ming-Jong Tsai, T. M. Wu, R. C. Chu

Abstract:

The purpose of this study is to develop an advanced linear calibration technique for air flow sensing using CTA-based hot-wire anemometry. The system contains a host PC with a human-machine interface, a wind tunnel, a wind speed controller, an automatic data acquisition module, and a nonlinear calibration model. To reduce the fitting error obtained with a single fitting polynomial, this study proposes a Multiple Three-Order Polynomial Fitting Method (MPFM) for fitting the non-linear output of a CTA-based hot-wire anemometer. The CTA-based anemometer with built-in fitting parameters is installed in the wind tunnel, and the wind speed is controlled by the PC-based controller. The hot-wire anemometer's thermistor resistance change is converted into a voltage signal or temperature difference and then sent to the PC through a DAQ card. After the measurements of the original signal are completed, the multiple polynomial coefficients are automatically calculated and then sent to the microprocessor in the hot-wire anemometer. Finally, the corrected hot-wire anemometer is verified for linearity, repeatability, and error percentage, and the system outputs quality control reports.
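
A minimal sketch of the multiple third-order polynomial idea described above, assuming the calibration range is simply split into contiguous segments with a separate cubic fitted to each using NumPy; the segment boundaries and the King's-law-like response curve are illustrative assumptions, not the paper's calibration data.

```python
import numpy as np

# Synthetic calibration data: CTA bridge voltage E versus air velocity U
u = np.linspace(0.5, 20.0, 60)
e = np.sqrt(1.4 + 0.9 * np.sqrt(u))        # assumed nonlinear CTA response

# Split the calibration range into segments and fit a cubic to each segment
segments = np.array_split(np.arange(u.size), 3)
fits = [np.polyfit(e[idx], u[idx], deg=3) for idx in segments]

def velocity_from_voltage(voltage: float) -> float:
    """Evaluate the cubic of the segment whose voltage range contains the input."""
    for idx, coeffs in zip(segments, fits):
        if e[idx].min() <= voltage <= e[idx].max():
            return float(np.polyval(coeffs, voltage))
    coeffs = fits[0] if voltage < e.min() else fits[-1]   # outside calibrated range
    return float(np.polyval(coeffs, voltage))

print(velocity_from_voltage(float(e[10])), u[10])   # fitted vs. true velocity
```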

Keywords: flow rate sensing, hot wire, constant temperature anemometry (CTA), linear calibration, multiple three-order polynomial fitting method (MPFM), temperature compensation

Procedia PDF Downloads 402
13667 Experimental Analysis of Structure Borne Noise in an Enclosure

Authors: Waziralilah N. Fathiah, A. Aminudin, U. Alyaa Hashim, T. Vikneshvaran, D. Shakirah Shukor

Abstract:

This paper presents an experimental analysis of structure-borne noise in a rectangular enclosure prototype made by joining sheet aluminum and plywood. The study is significant because many people do not realize the annoyance caused by structure-borne noise. In this study, modal analysis is carried out to examine the structure's behaviour in order to identify the characteristics of the enclosure in the frequency domain from 0 Hz to 200 Hz. A number of modes are identified, and the characteristics of the mode shapes are categorized. A modal experiment is used to diagnose the structural behaviour, while a microphone is used to diagnose the sound. Spectral testing is performed on the enclosure: it is acoustically excited using a shaker, and as it vibrates, the vibration and noise responses sensed by a tri-axial accelerometer and a microphone are recorded, respectively. The experimental work is performed on each node lying on the gridded surface of the enclosure, with both measurements carried out simultaneously. The experimental modal results are validated against a simulation performed using MSC Nastran software. To reduce the structure-borne noise, a mitigation method is used whereby stiffener plates are placed perpendicularly on the sheet aluminum. Using this method, a reduction in structure-borne noise is successfully achieved at the end of the study.

Keywords: enclosure, modal analysis, sound analysis, structure borne-noise

Procedia PDF Downloads 418
13666 Increasing Health Education Tools Satisfaction in Nursing Staffs

Authors: Lu Yu Jyun

Abstract:

Background: Health education is important nursing work that aims to strengthen the self-care ability of patients and their family members. Our department educates through three methods: verbal instruction, flyers, and demonstration videos. The satisfaction rate with health education tool use was 54.3% among nursing staff; the main reason was that there had been no storage area for flyers, causing an extra workload in assessing flyers. The satisfaction rate with health education among patients and families was 70.7%. We aimed to improve this situation between 13 April and 6 June 2021. Method: We introduced the ECRS method to eliminate repetitive and redundant actions and redesigned the health education tool usage workflow to improve nursing staff efficiency and further enhance care quality and job satisfaction. Result: The satisfaction rate with health education tool usage among nursing staff rose from 54.3% to 92.5%, and the satisfaction rate with health education among patients and families rose from 70.7% to 90.2%. Conclusion: The assessment time for health education tools dropped from 10 minutes to 3 minutes, significantly reducing the nursing staff's workload. An estimated 1,213 sheets of paper are saved per month, or 14,556 per year, which also benefits the environment. The health education map has been implemented in other nursing departments since October because of its high efficiency, and it makes health education tools more user-friendly.

Keywords: health education tools, satisfaction, nursing staff

Procedia PDF Downloads 133
13665 Dynamic Distribution Calibration for Improved Few-Shot Image Classification

Authors: Majid Habib Khan, Jinwei Zhao, Xinhong Hei, Liu Jiedong, Rana Shahzad Noor, Muhammad Imran

Abstract:

Deep learning is increasingly employed in image classification, yet the scarcity and high cost of labeled data for training remain a challenge. Limited samples often lead to overfitting due to biased sample distribution. This paper introduces a dynamic distribution calibration method for few-shot learning. Initially, base and new class samples undergo normalization to mitigate disparate feature magnitudes. A pre-trained model then extracts feature vectors from both classes. The method dynamically selects distribution characteristics from base classes (both adjacent and remote) in the embedding space, using a threshold value approach for new class samples. Given the propensity of similar classes to share feature distributions like mean and variance, this research assumes a Gaussian distribution for feature vectors. Subsequently, distributional features of new class samples are calibrated using a corrected hyperparameter, derived from the distribution features of both adjacent and distant base classes. This calibration augments the new class sample set. The technique demonstrates significant improvements, with up to 4% accuracy gains in few-shot classification challenges, as evidenced by tests on miniImagenet and CUB datasets.
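
A minimal sketch of the calibration step described above, assuming Gaussian features: the mean and covariance of a new-class sample are corrected using statistics of the nearest base classes, and extra features are sampled from the calibrated distribution to augment the support set. The hyperparameter names, nearest-class selection rule, and data are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def calibrate_and_sample(x_new, base_means, base_covs, k=2, alpha=0.2, n_samples=100):
    """Calibrate the feature distribution of one new-class sample using the
    k closest base classes (by mean distance) and draw synthetic features."""
    dists = np.linalg.norm(base_means - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    mean = (base_means[nearest].sum(axis=0) + x_new) / (k + 1)
    cov = base_covs[nearest].mean(axis=0) + alpha * np.eye(x_new.size)
    return np.random.default_rng(0).multivariate_normal(mean, cov, n_samples)

# Illustrative base-class statistics (e.g. from a pre-trained backbone)
rng = np.random.default_rng(1)
base_means = rng.normal(size=(10, 16))            # 10 base classes, 16-d features
base_covs = np.stack([np.eye(16)] * 10)
x_new = rng.normal(size=16)                       # one labeled new-class sample

augmented = calibrate_and_sample(x_new, base_means, base_covs)
print(augmented.shape)   # (100, 16) synthetic features to train a simple classifier
```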

Keywords: deep learning, computer vision, image classification, few-shot learning, threshold

Procedia PDF Downloads 50
13664 Compression Index Estimation by Water Content and Liquid Limit and Void Ratio Using Statistics Method

Authors: Lizhou Chen, Abdelhamid Belgaid, Assem Elsayed, Xiaoming Yang

Abstract:

The compression index is essential in foundation settlement calculations. The traditional method for determining the compression index is the consolidation test, which is expensive and time-consuming. Many researchers have used regression methods to develop empirical equations for predicting the compression index from soil properties. Based on a large number of compression index data collected from consolidation tests, the accuracy of some popular empirical equations was assessed. It was found that the primary compression index is significantly overestimated by some equations, while it is underestimated by others. Sensitivity analyses of soil parameters, including water content, liquid limit, and void ratio, were performed. The results indicate that the compression index obtained from the void ratio is the most accurate. The ANOVA (analysis of variance) demonstrates that equations with multiple soil parameters cannot provide better predictions than equations with a single soil parameter; in other words, it is not necessary to develop relationships between the compression index and multiple soil parameters. Meanwhile, it was noted that the secondary compression index is approximately 0.7-5.0% of the primary compression index, with an average of 2.0%. Finally, prediction equations developed using the power regression technique are proposed, which provide more accurate predictions than existing equations.
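
A minimal sketch of the power-regression idea described above, fitting Cc = a·e0^b by linearizing with logarithms; the void-ratio and compression-index pairs below are synthetic placeholders, not the collected database.

```python
import numpy as np

# Synthetic (void ratio e0, compression index Cc) pairs standing in for test data
e0 = np.array([0.6, 0.8, 1.0, 1.2, 1.5, 1.8, 2.2])
cc = np.array([0.15, 0.22, 0.30, 0.38, 0.50, 0.63, 0.82])

# Power regression Cc = a * e0**b, linearized as log(Cc) = log(a) + b * log(e0)
b, log_a = np.polyfit(np.log(e0), np.log(cc), deg=1)
a = np.exp(log_a)
print(f"Cc ~ {a:.3f} * e0^{b:.3f}")

cc_pred = a * e0 ** b
r2 = 1 - np.sum((cc - cc_pred) ** 2) / np.sum((cc - cc.mean()) ** 2)
print(f"R^2 = {r2:.3f}")   # goodness of fit of the power model
```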

Keywords: compression index, clay, settlement, consolidation, secondary compression index, soil parameter

Procedia PDF Downloads 149
13663 Traditional Drawing, BIM and Erudite Design Process

Authors: Maryam Kalkatechi

Abstract:

Nowadays, parametric design, scientific analysis, and digital fabrication are dominant, and many architectural practices increasingly seek to incorporate advanced digital software and fabrication in their projects. An erudite design process that combines digital and practical aspects in a strong frame within the method resulted from the dissertation research. The digital aspects are the progressive advancements in algorithmic design and simulation software; these have helped firms develop more holistic concepts at the early stage and maintain collaboration among disciplines during the design process. The erudite design process enhances current design processes by encouraging the designer to implement construction and architectural knowledge within the algorithm in order to create successful design processes. The erudite design process also involves ongoing improvements in applying the new method of 3D printing in construction, achieved through 'data-sketches'. The term 'data-sketch' was developed by the author in the recently completed dissertation; it accommodates the architect's decisions within the algorithm. This paper introduces the erudite design process and its components and summarizes the application of this process in the development of the '3D printed construction unit'. The paper contributes to bridging academia and practice with advanced technology by presenting a design process that transfers dominance from the tool to the learned architect and encourages innovation in design processes.

Keywords: erudite, data-sketch, algorithm design in architecture, design process

Procedia PDF Downloads 263
13662 A Simple Computational Method for the Gravitational and Seismic Soil-Structure-Interaction between New and Existent Buildings Sites

Authors: Nicolae Daniel Stoica, Ion Mierlus Mazilu

Abstract:

This work is a numerical study and aims to address the design of new buildings in the 3D vicinity of existing buildings. With today's continuous development and congestion of urban centers, there is a big question about the influence of new buildings on an already existing neighbouring site. Thus, in this study, we focus on how existing buildings may be affected by newly constructed buildings and to what extent this influence really decreases. The problem of modeling the interaction between buildings is not simple anywhere in the world, and neither is it in Romania. Unfortunately, designers most often do not perform the calculations that could determine how close to reality these 3D influences are, using neither the simplified method nor the more advanced methods. In much of the literature, making a 'shield' (piles or molded walls) is considered absolutely sufficient to stop the influence between the buildings, and so the soil under the structure is often ignored in the calculation models. The main causes for which the soil is neglected in the analysis are related to the complexity of modeling the interaction between soil and structure. In this paper, based on a new simple but efficient methodology, we determine, for a large number of study cases, the influence of a new building on an existing one in terms of soil-structure interaction and its effect on the behavior of the structures. The study covers the additional subsidence that may occur during the execution of the new works and after their completion. It also presents the internal force diagrams and the deflections in the soil for both the original case and the final stage. This is necessary to see the extent of the expected impact of the new building on existing areas.

Keywords: soil, structure, interaction, piles, earthquakes

Procedia PDF Downloads 283
13661 The Impact of E-Commerce on the Physical Space of Traditional Retail System

Authors: Sumayya S.

Abstract:

Making cities adaptive and inclusive is an inherent goal and challenge for contemporary cities. This is a serious concern when urban transformations occur in varying magnitudes due to visible and invisible factors. One such largely invisible factor is e-commerce, whose expanding operations are understood to change the conventional spatial structure both positively and negatively. With the continued growth of e-commerce activities and their future potential, market analysts, media, and even retailers have questioned the importance of a future presence of traditional brick-and-mortar stores in cities as a critical element, with some even referring to the repeated announcements of the closure of some store chains as the end of the in-store shopping era. People have become more comfortable with staying indoors and with door delivery systems, and this has changed the usage of public spaces, especially commercial corridors. This research helps in presenting a new approach for planning and designing commercial activity centers and critically examines the impact of e-commerce on the urban fabric, such as the division and fragmentation of space, showroom syndrome, and the reconceptualization of space. The changes are understood by analyzing the e-commerce logistics process. Based on these inferences, the study concludes that an integrated approach is needed in the planning and design of public spaces for sustainable omnichannel retailing. This study was carried out with the following objectives: to monitor the impact of e-commerce on traditional shopping space; to explore the new challenges and opportunities faced by the urban form; and to explore how adaptive and inclusive our cities are to the dynamics of transformative changes caused by e-commerce.

Keywords: E-commerce, shopping streets, online environment, offline environment, shopping factors

Procedia PDF Downloads 70
13660 Study of Climate Change Process on Hyrcanian Forests Using Dendroclimatology Indicators (Case Study of Guilan Province)

Authors: Farzad Shirzad, Bohlol Alijani, Mehry Akbary, Mohammad Saligheh

Abstract:

Climate change and global warming are very important issues today. The process of climate change, especially changes in temperature and precipitation, is the most important issue in the environmental sciences; climate change means a change in the long-term averages. Iran is located in arid and semi-arid regions due to its proximity to the equator and its location in the subtropical high-pressure zone. In this respect, the Hyrcanian forest is a green necklace between the Caspian Sea and the south of the Alborz mountain range. At the forty-third session of UNESCO, it was registered as the second natural heritage site of Iran. Beech is one of the most important tree species and the most industrially significant species of the Hyrcanian forests. In this research, dendroclimatology was applied using tree-ring widths and climatic data (temperature and precipitation) from the Shanderman meteorological station located in the study area. The non-parametric Mann-Kendall statistical method was used to investigate the trend of climate change over a 202-year time series of growth rings, and the Pearson statistical method was used to correlate the growth rings of beech trees with climatic variables in the region. The results obtained from the time series of beech growth rings showed that the changes in ring width had a downward, negative trend, significant at the 5% level, and that climate change has occurred. The average minimum, mean, and maximum temperatures and the evaporation in the growing season had an increasing trend, while the annual precipitation had a decreasing trend. Using the Pearson method, the correlation of growth-ring width with the average temperature in July, August, and September is negative, whereas the correlation with the average maximum temperature in February is positive and significant at the 95% level; for precipitation, the correlation in June is positive and significant at the 95% level.
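
A minimal sketch of the two statistical steps described above: the Mann-Kendall S statistic for the trend in ring widths and the Pearson correlation between ring width and a monthly climate variable. The two series below are synthetic placeholders, not the measured chronology.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(202)
ring_width = 1.0 - 0.002 * years + rng.normal(0, 0.05, years.size)  # synthetic chronology
july_temp = 24 + 0.01 * years + rng.normal(0, 0.5, years.size)      # synthetic climate series

# Mann-Kendall S statistic: sum of the signs of all pairwise differences
s = sum(np.sign(ring_width[j] - ring_width[i])
        for i in range(years.size) for j in range(i + 1, years.size))
print("Mann-Kendall S:", s)      # a negative S indicates a downward trend

# Pearson correlation between ring width and the monthly climate variable
r, p = stats.pearsonr(ring_width, july_temp)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```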

Keywords: climate change, dendroclimatology, hyrcanian forest, beech

Procedia PDF Downloads 88
13659 Nostalgic Tourism in Macau: The Bidirectional Causal Relationship between Destination Image and Experiential Value

Authors: Aliana Leong, T. C. Huan

Abstract:

Nostalgia-themed tourism products are becoming popular in many countries. This study investigates the role of nostalgia in destination image and experiential value and their effect on subsequent behavioral intention. The survey used a stratified sampling method to include respondents from all the nearby Asian regions; the sampling is based on the data on inbound tourists provided by the Statistics and Census Service (DSEC) of the government of Macau. The questionnaire consisted of five sections of 5-point Likert scale questions: (1) nostalgia, (2) destination image both before and after the experience, (3) expected value, (4) experiential value, and (5) future visit intention. Data were analysed with structural equation modelling. The results indicate that nostalgia plays an important part in forming destination image and experiential value before individuals have had a chance to experience the destination. Destination image and experiential value share a bidirectional causal relationship that eventually contributes to future visit intention. The study also found that while experiential value is more effective in generating destination image, the latter contributes more to future visit intention. The research design measures destination image and experiential value both before and after respondents have experienced the destination. The distinction between destination image and expected/experiential value can be examined because of the longitudinal research design, which also allows this study to observe how nostalgia translates into future visit intention.

Keywords: nostalgia, destination image, experiential value, future visit intention

Procedia PDF Downloads 380
13658 Optimization of Cutting Parameters on Delamination Using Taguchi Method during Drilling of GFRP Composites

Authors: Vimanyu Chadha, Ranganath M. Singari

Abstract:

Drilling composite materials is a frequently practiced machining process during assembly in various industries such as automotive and aerospace. However, the drilling of glass fiber reinforced plastic (GFRP) composites is significantly affected by the damage tendency of these materials under cutting forces such as thrust force and torque. The aim of this paper is to investigate the influence of various cutting parameters, such as cutting speed and feed rate, and subsequently to study the influence of the number of layers on the delamination produced while drilling a GFRP composite. A plan of experiments based on Taguchi techniques was instituted, considering drilling with prefixed cutting parameters in a hand lay-up GFRP material. The damage induced by drilling the GFRP composites was measured. Moreover, Analysis of Variance (ANOVA) was performed to determine the drilling parameters and number of layers that minimize delamination. The optimum combination of drilling factors was obtained using the analysis of the signal-to-noise ratio. The conclusions revealed that the feed rate was the most influential factor on delamination. The best delamination results were obtained with composites with a greater number of layers at lower cutting speeds and feed rates.
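
A minimal sketch of the signal-to-noise analysis mentioned above, using the Taguchi smaller-the-better form appropriate when delamination is to be minimized; the delamination-factor replicates per run are placeholders, not the experimental data.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio, in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Placeholder delamination-factor replicates for three drilling runs
runs = {
    "low speed / low feed":   [1.05, 1.07],
    "low speed / high feed":  [1.18, 1.21],
    "high speed / high feed": [1.30, 1.27],
}
for name, y in runs.items():
    print(f"{name}: S/N = {sn_smaller_the_better(y):.2f} dB")
# The run with the highest (least negative) S/N ratio is the preferred setting.
```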

Keywords: analysis of variance, delamination, design optimization, drilling, glass fiber reinforced plastic composites, Taguchi method

Procedia PDF Downloads 244
13657 A Machine Learning Framework Based on Biometric Measurements for Automatic Fetal Head Anomalies Diagnosis in Ultrasound Images

Authors: Hanene Sahli, Aymen Mouelhi, Marwa Hajji, Amine Ben Slama, Mounir Sayadi, Farhat Fnaiech, Radhwane Rachdi

Abstract:

Fetal abnormality is still a public health problem of concern for both mother and baby. Head defects are among the most high-risk fetal deformities. Fetal head categorization is a sensitive task that needs close attention from neurological experts. In this sense, biometric measurements can be extracted by gynecologists and compared with ground-truth charts to identify normal or abnormal growth. The fetal head biometric measurements, such as the Biparietal Diameter (BPD), Occipito-Frontal Diameter (OFD), and Head Circumference (HC), need to be monitored, and an expert should carry out their manual delineation. This work proposes a new approach to automatically compute the BPD, OFD, and HC based on morphological characteristics extracted from the head shape. The studied data, selected at the same Gestational Age (GA) from fetal Ultrasound (US) images, are classified into two categories: normal and abnormal. The abnormal subjects include hydrocephalus, microcephaly, and dolichocephaly anomalies. Using a support vector machine (SVM) method, this study achieved high classification performance for the automated detection of anomalies. The proposed method is promising, although it does not need expert intervention.
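
A minimal sketch of the classification stage described above, training an SVM on head-shape features such as BPD, OFD, and HC; the feature values and labels are synthetic placeholders, and scikit-learn is assumed as the toolkit rather than quoted from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic feature rows: [BPD, OFD, HC] in millimetres at a fixed gestational age
rng = np.random.default_rng(0)
normal = rng.normal([88, 110, 330], [3, 4, 8], size=(40, 3))
abnormal = rng.normal([78, 118, 300], [5, 6, 12], size=(40, 3))
X = np.vstack([normal, abnormal])
y = np.array([0] * 40 + [1] * 40)          # 0 = normal, 1 = abnormal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
print("Test accuracy:", model.score(X_te, y_te))
```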

Keywords: biometric measurements, fetal head malformations, machine learning methods, US images

Procedia PDF Downloads 277
13656 Understanding the Interactive Nature in Auditory Recognition of Phonological/Grammatical/Semantic Errors at the Sentence Level: An Investigation Based upon Japanese EFL Learners’ Self-Evaluation and Actual Language Performance

Authors: Hirokatsu Kawashima

Abstract:

One important element of teaching and learning listening is intensive listening, such as listening for precise sounds, words, and grammatical and semantic units. Several classroom-based investigations have been conducted to explore the usefulness of auditory recognition of phonological, grammatical, and semantic errors in such a context. The current study reports the results of one such investigation, which targeted the auditory recognition of phonological, grammatical, and semantic errors at the sentence level. 56 Japanese EFL learners participated in this investigation, in which their recognition performance for phonological, grammatical, and semantic errors was measured on a 9-point self-evaluation scale from the perspective of 1) two types of similar English sounds (vowel and consonant minimal pair words), 2) two types of sentence word order (verb phrase-based and noun phrase-based word orders), and 3) two types of semantic consistency (verb-purpose and verb-place agreements), respectively, and their general listening proficiency was examined using standardized tests. A number of findings have been made about the interactive relationships between the three types of auditory error recognition and general listening proficiency. Analyses based on the OPLS (Orthogonal Projections to Latent Structures) regression model have disclosed, for example, that the three types of auditory error recognition are linked in a non-linear way: the highest explanatory power for general listening proficiency may be attained when quadratic interactions between auditory recognition of errors related to vowel minimal pair words and that of errors related to noun phrase-based word order are embraced (R²=.33, p=.01).

Keywords: auditory error recognition, intensive listening, interaction, investigation

Procedia PDF Downloads 499
13655 Numerical Investigation of Turbulent Flow Control by Suction and Injection on a Subsonic NACA23012 Airfoil by Proper Orthogonal Decomposition Analysis and Perturbed Reynolds Averaged Navier‐Stokes Equations

Authors: Azam Zare

Abstract:

Separation flow control for performance enhancement over airfoils at high incidence angles has become an increasingly important topic. This work details the characteristics of an efficient feedback control of the turbulent subsonic flow over a NACA23012 airfoil using a forced reduced-order model based on proper orthogonal decomposition (POD)/Galerkin projection and a perturbation method applied to the compressible Reynolds-Averaged Navier-Stokes equations. The forced reduced-order model is used in the optimal control of the turbulent separated flow over a NACA23012 airfoil at a Mach number of 0.2, a Reynolds number of 5×10⁶, and a high incidence angle of 24° using blowing/suction control jets. The Spalart-Allmaras turbulence model is implemented for the high Reynolds number calculations. The main shortcoming of the POD/Galerkin projection of the flow equations for control purposes is that the blowing/suction control jet velocity does not appear explicitly in the resulting reduced-order model. Combining the perturbation method with the POD/Galerkin projection of the flow equations introduces a forced reduced-order model that can predict the time-varying influence of the blowing/suction control jet velocity. An optimal control theory based on the forced reduced-order system is used to design a control law for the nonlinear reduced-order model, which attempts to minimize the vorticity content in the turbulent flow field over the NACA23012 airfoil. Numerical simulations were performed to help understand the behavior of the controlled suction jet at 12% to 18% chord from the leading edge and of a pair of blowing/suction jets at 15% to 18% and 24% to 30% chord from the leading edge, respectively. Analysis of the streamline profiles indicates that the blowing/suction jets are efficient in removing separation bubbles and increasing the lift coefficient by up to 22%, while the perturbation method can predict the flow field in an accurate manner.
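
A minimal sketch of the POD step underlying the reduced-order model described above: flow snapshots are assembled into a matrix and the dominant modes are extracted with an SVD. The snapshot data are synthetic placeholders, and the Galerkin projection, perturbation terms, and control design are not shown.

```python
import numpy as np

# Synthetic snapshot matrix: each column is a flattened flow field at one instant
rng = np.random.default_rng(0)
n_points, n_snapshots = 2000, 80
snapshots = rng.normal(size=(n_points, n_snapshots))

mean_flow = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_flow

# POD modes are the left singular vectors; singular values give the modal energy
modes, sigma, _ = np.linalg.svd(fluctuations, full_matrices=False)
energy = sigma ** 2 / np.sum(sigma ** 2)
n_keep = int(np.searchsorted(np.cumsum(energy), 0.99) + 1)
print(f"{n_keep} modes capture 99% of the fluctuation energy")

# Temporal coefficients for the retained modes (inputs to a Galerkin projection)
coeffs = modes[:, :n_keep].T @ fluctuations
print(coeffs.shape)
```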

Keywords: flow control, POD, Galerkin projection, separation

Procedia PDF Downloads 140
13654 Beta-Carotene Attenuates Cognitive and Hepatic Impairment in Thioacetamide-Induced Rat Model of Hepatic Encephalopathy via Mitigation of MAPK/NF-κB Signaling Pathway

Authors: Marawan Abd Elbaset Mohamed, Hanan A. Ogaly, Rehab F. Abdel-Rahman, Ahmed-Farid O.A., Marwa S. Khattab, Reham M. Abd-Elsalam

Abstract:

Liver fibrosis is a severe worldwide health concern caused by various chronic liver disorders. Hepatic encephalopathy (HE) is one of its most common complications, affecting the liver and the brain's cognitive function. Beta-carotene (B-Car) is an organic, strongly colored red-orange pigment abundant in fungi, plants, and fruits. This study investigated the neuroprotective potential of B-Car against thioacetamide (TAA)-induced neurotoxicity and cognitive decline in HE in rats. Hepatic encephalopathy was induced by TAA (100 mg/kg, i.p.) three times per week for two weeks. B-Car was given orally (10 or 20 mg/kg) daily for two weeks after the TAA injections. The organ-to-body weight ratio, serum transaminase activities, the liver's antioxidant parameters, ammonia, and liver histopathology were assessed. In addition, the brain's mitogen-activated protein kinase (MAPK), nuclear factor kappa B (NF-κB), antioxidant parameters, adenosine triphosphate (ATP), adenosine monophosphate (AMP), norepinephrine (NE), dopamine (DA), serotonin (5-HT), 5-hydroxyindoleacetic acid (5-HIAA), cAMP response element-binding protein (CREB) expression, and B-cell lymphoma 2 (Bcl-2) expression were measured. The brain's cognitive functions (spontaneous locomotor activity, rotarod performance test, object recognition test) were assessed. B-Car prevented the alteration of the brain's cognitive function in a dose-dependent manner, and the histopathological outcomes supported this biochemical evidence. Based on these results, it can be concluded that B-Car could be used to treat the neurotoxic consequences of HE in the brain via downregulation of the MAPK/NF-κB signaling pathways.

Keywords: beta-carotene, liver injury, MAPK, NF-κB, rat, thioacetamide

Procedia PDF Downloads 145
13653 A Neural Network Approach to Understanding Turbulent Jet Formations

Authors: Nurul Bin Ibrahim

Abstract:

Advancements in neural networks have offered valuable insights into fluid dynamics, notably in addressing turbulence-related challenges. In this research, we introduce multiple applications of neural network models, namely feed-forward and recurrent neural networks, to explore the relationship between jet formations and stratified turbulence within stochastically excited Boussinesq systems. Using machine learning tools such as TensorFlow and PyTorch, the study has created models that effectively mimic and reveal the underlying features of the complex patterns of jet formation and stratified turbulence. These models do more than just help us understand these patterns; they also offer a faster way to solve problems in stochastic systems, improving upon traditional numerical techniques for solving stochastic differential equations, such as the Euler-Maruyama method. In addition, the research includes a thorough comparison with the Statistical State Dynamics (SSD) approach, which is a well-established method for studying chaotic systems. This comparison helps evaluate how well neural networks can help us understand the complex relationship between jet formations and stratified turbulence. The results of this study underscore the potential of neural networks in computational physics and fluid dynamics, opening up new possibilities for more efficient and accurate simulations in these fields.
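
For reference, the Euler-Maruyama scheme mentioned above advances a stochastic differential equation dX = f(X) dt + g(X) dW in small time steps; a minimal sketch for a one-dimensional Ornstein-Uhlenbeck-type process (an illustrative stand-in, not the stochastically excited Boussinesq system) is:

```python
import numpy as np

def euler_maruyama(f, g, x0, dt, n_steps, seed=0):
    """Integrate dX = f(X) dt + g(X) dW with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))            # Brownian increment
        x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * dw
    return x

# Illustrative Ornstein-Uhlenbeck process: the drift pulls X back toward zero
path = euler_maruyama(f=lambda x: -0.5 * x, g=lambda x: 0.3, x0=1.0,
                      dt=0.01, n_steps=1000)
print(path[-1])
```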

Keywords: neural networks, machine learning, computational fluid dynamics, stochastic systems, simulation, stratified turbulence

Procedia PDF Downloads 54