Search results for: coefficient of consolidation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2491

181 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation

Authors: W. Meron Mebrahtu, R. Absi

Abstract:

Velocity distribution in turbulent open-channel flows is organized in a complex manner. This is due to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. The phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models that are suitable for engineering applications. However, predictions are inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplifications of the general transport equations and on an accurate representation of eddy viscosity. A wide rectangular open channel is a suitable starting point for the study; the other assumptions are a smooth wall and sediment-free flow under steady and uniform flow conditions. These assumptions allow examining the effect of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profile: one from the Reynolds-averaged Navier-Stokes (RANS) equation and one from an equilibrium consideration between turbulent kinetic energy (TKE) production and dissipation. Different analytic models for eddy viscosity, TKE, and mixing length were then assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl’s eddy viscosity model and the Van Driest mixing length gives more precise results. For the log layer and outer region, a mixing-length equation derived from Von Karman’s similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for eddy viscosity is used. This method yields more accurate velocity profiles with the same value of the damping coefficient, valid under different flow conditions. This work continues with investigations of narrow channels, complex geometries, and the effect of solids transported in sewers.
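
As a point of reference (not the authors' code), a minimal sketch of the near-wall integration implied above, using Prandtl's mixing-length eddy viscosity with Van Driest damping and the standard constants κ ≈ 0.41 and A⁺ ≈ 26 (assumed values, not taken from the paper):

```python
import numpy as np

# Sketch: integrate du+/dy+ = 2 / (1 + sqrt(1 + 4*l+^2)), which follows from
# a constant total shear stress with the Van Driest-damped mixing length
# l+ = kappa*y+*(1 - exp(-y+/A+)). Constants are the usual assumed values.
kappa, A_plus = 0.41, 26.0

def mixing_length(y_plus):
    return kappa * y_plus * (1.0 - np.exp(-y_plus / A_plus))

def velocity_profile(y_plus_max=1000.0, n=20000):
    y = np.linspace(0.0, y_plus_max, n)
    dy = y[1] - y[0]
    u = np.zeros_like(y)
    for i in range(1, n):
        l = mixing_length(y[i])
        dudy = 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * l**2))
        u[i] = u[i - 1] + dudy * dy
    return y, u

y_plus, u_plus = velocity_profile()
# Far from the wall this tends toward the log law u+ ~ (1/kappa) ln(y+) + const.
```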

Keywords: accuracy, eddy viscosity, sewers, velocity profile

Procedia PDF Downloads 112
180 Construction and Validation of Allied Bank-Teller Aptitude Test

Authors: Muhammad Kashif Fida

Abstract:

In a bank, the teller’s job (cash officer) is highly important and critical: at one end it requires soft and brisk customer service, and at the other, handling cash with integrity. It is always challenging for recruiters to hire competent and trustworthy tellers. To the author’s knowledge, there is no comprehensive test available that can assist recruitment in Pakistan, so there is a dire need for a psychometric battery that could support the recruitment of potential candidates for the teller’s position. The aim of the present study was therefore to construct the ABL-Teller Aptitude Test (ABL-TApT). Three major phases were designed following the American Psychological Association’s guidelines. The first phase was qualitative: indicators of the test were explored by content analysis of (a) tellers’ job descriptions (n=3), (b) interviews with senior tellers (n=6), and (c) interviews with HR personnel (n=4). This content analysis yielded three broader constructs: (i) personality, (ii) integrity/honesty, and (iii) professional work aptitude. The identified indicators were operationalized and statements (k=170) were generated verbatim. These were then forwarded to five experts for a review of content validity, and the experts finalized 156 items. In the second phase, the ABL-TApT (k=156) was administered to 323 participants through a computer application. The overall reliability of the test shows a significant alpha coefficient (α=.81), and the subscales also have significant alpha coefficients. Confirmatory Factor Analysis (CFA), performed to estimate construct validity, confirms four main factors comprising eight personality traits (confidence, organized, compliance, goal-oriented, persistent, forecasting, patience, caution), one integrity/honesty factor, four factors of professional work aptitude (basic numerical ability and perceptual accuracy of letters, numbers and signatures), and two factors for customer services (customer services, emotional maturity). Values of GFI, AGFI, NNFI, CFI, RFI and RMSEA are in the recommended range, depicting significant model fit. In the third phase, concurrent validity evidence was pursued. The personality and integrity parts of the scale have significant correlations with the ‘conscientiousness’ factor of the NEO-PI-R, reflecting strong concurrent validity. Customer services and emotional maturity have significant correlations with the ‘Bar-On EQi’, providing further evidence of strong concurrent validity. It is concluded that the ABL-TApT is a significantly reliable and valid battery of tests that will assist in the objective recruitment of tellers and help recruiters find more suitable human resources.
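
For readers unfamiliar with the reliability statistic quoted above, a minimal sketch of how a Cronbach's alpha of the kind reported (α = .81) is computed from an item-score matrix; the scores below are illustrative placeholders, not the study data:

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = test items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical scores for 5 respondents on 4 items (illustration only)
scores = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 5, 4, 5], [3, 3, 3, 4], [1, 2, 2, 1]]
print(round(cronbach_alpha(scores), 2))
```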

Keywords: concurrent validity, construct validity, content validity, reliability, teller aptitude test, objective recruitment

Procedia PDF Downloads 226
179 Density Interaction in Determinate and Indeterminate Faba Bean Types

Authors: M. Abd El Hamid Ezzat

Abstract:

Two field trials were conducted to study the effect of plant densities, i.e., 190, 222, 266, 330 and 440 × 10³ plants ha⁻¹, on morphological characters and physiological and yield attributes of two faba bean types, viz. determinate (strain FLIP-87-117) and indeterminate (cv. Giza-461). The results showed that the indeterminate plants significantly surpassed the determinate plants in plant height at 75 and 90 days from sowing, in number of leaves at all growth stages, and in dry matter accumulation at 45 and 90 days from sowing. Determinate plants possessed a greater number of side branches than the indeterminate plants, but the difference was only significant at 90 days from sowing. A greater number of flowers was produced by the indeterminate plants than by the determinate plants at 75 and 90 days from sowing, and although shedding was obvious in both types, it was greater in the determinate plants than in the indeterminate ones at 90 days from sowing. Increasing plant density resulted in reductions in the number of leaves, branches and flowers and in dry matter accumulation per plant for both faba bean types; plant height, however, showed the reverse trend. Moreover, at all plant densities the indeterminate type surpassed the determinate plants in all growth characters studied except for the number of branches per plant at 90 days from sowing. The indeterminate plant leaves contained significantly greater concentrations of photosynthetic pigments, i.e., chlorophyll a, b and carotenoids, than the determinate plant leaves. The data also showed a significant reduction in photosynthetic pigment concentration as planting density increased. Light extinction coefficient (K) values reached their maximum at 60 days from sowing and then declined sharply at 75 days from sowing. The data showed that illumination inside the determinate faba bean canopies was better than inside the indeterminate ones. K values tended to increase as planting density increased; meanwhile, significant interactions between faba bean type and planting density on K were reported at all growth stages. The leaves of both determinate and indeterminate faba bean plants reached their maximum expansion at 75 days from sowing, reflecting the highest LAI values, and then declined in the subsequent growth stage. The indeterminate faba bean plants significantly surpassed the determinate plants in LAI up to 75 days from sowing. Growth analysis showed that NAR, RGR and CGR reached their maximum rates at the 60–75 day growth stage. The faba bean types did not differ significantly in NAR at the early growth stage. The indeterminate plants were able to grow faster, with significantly higher CGR values, than the determinate plants. The indeterminate faba bean plants surpassed the determinate ones in number of seeds per pod and per plant, 100-seed weight, and seed yield per plant and per hectare at all plant densities. Seed yield increased with increasing plant density for both types, and the highest seed yield was attained for both types at 440 × 10³ plants ha⁻¹.
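
For reference, the growth-analysis indices and light-extinction relation named above are conventionally defined as follows (classical definitions; the authors' exact formulations are not stated in the abstract):

```latex
\mathrm{RGR}=\frac{\ln W_2-\ln W_1}{t_2-t_1},\qquad
\mathrm{NAR}=\frac{W_2-W_1}{t_2-t_1}\cdot\frac{\ln L_2-\ln L_1}{L_2-L_1},\qquad
\mathrm{CGR}=\frac{W_2-W_1}{A\,(t_2-t_1)},\qquad
\frac{I}{I_0}=e^{-K\cdot \mathrm{LAI}}
```

Here W is plant dry weight, L is leaf area, A is ground area, t is time, and I/I₀ is the fraction of incident light transmitted through a canopy of leaf area index LAI with extinction coefficient K.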

Keywords: determinate, indeterminate faba bean, physiological attributes, yield attributes

Procedia PDF Downloads 238
178 MCD-017: Potential Candidate from the Class of Nitroimidazoles to Treat Tuberculosis

Authors: Gurleen Kour, Mowkshi Khullar, B. K. Chandan, Parvinder Pal Singh, Kushalava Reddy Yumpalla, Gurunadham Munagala, Ram A. Vishwakarma, Zabeer Ahmed

Abstract:

New chemotherapeutic compounds against multidrug-resistant Mycobacterium tuberculosis (Mtb) are urgently needed to combat drug resistance in tuberculosis (TB). Apart from in-vitro potency against the target, physicochemical and pharmacokinetic properties play an imperative role in the drug discovery process. We have identified novel nitroimidazole derivatives with potential activity against Mycobacterium tuberculosis. One lead candidate, MCD-017, showed potent activity against the H37Rv strain (MIC=0.5 µg/ml) and was further evaluated in the drug development process. Methods: Basic physicochemical parameters like solubility and lipophilicity (LogP) were evaluated. Thermodynamic solubility was determined in PBS buffer (pH 7.4) using LC/MS-MS. The partition coefficient (Log P) of the compound was determined between octanol and phosphate-buffered saline (PBS, pH 7.4) at 25°C by the microscale shake-flask method. The compound followed Lipinski’s rule of five, which is predictive of good oral bioavailability, and was further evaluated for metabolic stability. In-vitro metabolic stability was determined in rat liver microsomes. The hepatotoxicity of the compound was also determined in the HepG2 cell line. The in-vivo pharmacokinetic profile of the compound after oral dosing was obtained using Balb/c mice. Results: The compound exhibited favorable solubility and lipophilicity. Its physical and chemical properties were used as a first assessment of drug-like properties. The compound obeyed Lipinski’s rule of five, with molecular weight < 500, number of hydrogen bond donors (HBD) < 5, and number of hydrogen bond acceptors (HBA) not more than 10. The log P of the compound was less than 5, and the compound is therefore predicted to exhibit good absorption and permeation. Pooled rat liver microsomes were prepared from rat liver homogenate for measuring metabolic stability; 99% of the compound was not metabolized and remained intact. The compound did not exhibit cytotoxicity in HepG2 cells up to 40 µg/ml. The compound revealed a good pharmacokinetic profile at a dose of 5 mg/kg administered orally, with a half-life (t1/2) of 1.15 hours, Cmax of 642 ng/ml, clearance of 4.84 ml/min/kg, and a volume of distribution of 8.05 l/kg. Conclusion: The emergence of multidrug-resistant (MDR) and extensively drug-resistant (XDR) tuberculosis emphasizes the need for novel drugs active against tuberculosis, and physicochemical and pharmacokinetic properties must be evaluated in the early stages of drug discovery to reduce the attrition associated with poor drug exposure. In summary, it can be concluded that MCD-017 may be considered a good candidate for further preclinical and clinical evaluations.
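
A minimal sketch of the Lipinski rule-of-five screen described above; the property values used here are placeholders for illustration, not measured data for MCD-017:

```python
# Rule-of-five screen: flag violations of the four classical thresholds.
def lipinski_pass(mol_wt, log_p, h_donors, h_acceptors):
    violations = sum([
        mol_wt >= 500,      # molecular weight < 500
        log_p >= 5,         # logP < 5
        h_donors > 5,       # H-bond donors <= 5
        h_acceptors > 10,   # H-bond acceptors <= 10
    ])
    return violations == 0, violations

# Hypothetical property values (not MCD-017 data)
ok, n_violations = lipinski_pass(mol_wt=380.0, log_p=2.1, h_donors=1, h_acceptors=6)
print(ok, n_violations)  # True, 0 -> predicted good oral absorption/permeation
```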

Keywords: mycobacterium tuberculosis, pharmacokinetics, physicochemical properties, hepatotoxicity

Procedia PDF Downloads 457
177 3D-Printing of Waveguide Terminations: Effect of Material Shape and Structuring on Their Characteristics

Authors: Lana Damaj, Vincent Laur, Azar Maalouf, Alexis Chevalier

Abstract:

A matched termination is an important part of passive waveguide components. It is typically used at the end of a waveguide transmission line to prevent reflections and improve signal quality. Waveguide terminations (loads) are commonly used in microwave and RF applications. In traditional microwave architectures, a waveguide termination usually consists of a standard rectangular waveguide made of a lossy resistive material and ended by a shorting metallic plate. These terminations are used to dissipate the energy as heat; however, they may increase the size and weight of the overall system. A new alternative solution consists of developing terminations based on 3D-printing of materials. Designing such terminations is very challenging since they must meet the requirements imposed by the system. These requirements include many parameters, such as the absorption and the power handling capability, in addition to the cost, size, and weight that have to be minimized. 3D-printing is a shaping process that enables the production of complex geometries and allows the best compromise between requirements to be found. In this paper, a comparison study has been made between different existing and new shapes of waveguide terminations. Indeed, 3D-printing of absorbers makes it possible to study not only standard shapes (wedge, pyramid, tongue) but also more complex topologies such as exponential ones. These shapes have been designed and simulated using CST MWS®. The loads have been printed using carbon-filled PolyLactic Acid (conductive PLA) from ProtoPasta. Since the terminations have been characterized in the X-band (from 8 GHz to 12 GHz), the rectangular waveguide standard WR-90 has been selected. The classical wedge shape has been used as a reference. First, all loads have been simulated with the same length, and two parameters have been compared: the absorption level (level of |S11|) and the dissipated power density. This study shows that the concave exponential pyramidal shape has the better absorption level and the convex exponential pyramidal shape has the better dissipated power density level. These two loads have been printed in order to measure their properties. A good agreement between the simulated and measured reflection coefficients has been obtained. Furthermore, a study of material structuring based on a honeycomb hexagonal structure has been investigated in order to vary the effective properties. In the final paper, the detailed methodology and the simulated and measured results will be presented in order to show how 3D-printing allows mass, weight, absorption level, and power behaviour to be controlled.
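
As a rough illustration of the absorption metric discussed above: for a matched one-port termination (no transmitted port), the dissipated power fraction follows from |S11| alone. The sketch below assumes a hypothetical measured |S11| of 0.1, not a value from this study:

```python
import numpy as np

def absorption_metrics(s11_mag):
    return_loss_db = -20.0 * np.log10(s11_mag)   # |S11| expressed in dB
    absorbed_fraction = 1.0 - s11_mag**2          # power not reflected back
    return return_loss_db, absorbed_fraction

rl, pa = absorption_metrics(0.1)                  # hypothetical |S11| = 0.1
print(f"return loss = {rl:.1f} dB, absorbed power = {100 * pa:.1f} %")
```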

Keywords: additive manufacturing, electromagnetic composite materials, microwave measurements, passive components, power handling capacity (PHC), 3D-printing

Procedia PDF Downloads 22
176 Welfare Dynamics and Food Prices' Changes: Evidence from Landholding Groups in Rural Pakistan

Authors: Lubna Naz, Munir Ahmad, G. M. Arif

Abstract:

This study analyzes the static and dynamic welfare impacts of food price changes for various landholding groups in Pakistan. The study uses three classifications of land ownership: landless, small landowners, and large landowners. It uses the Pakistan Rural Household Survey (PRHS), a panel survey of the Pakistan Institute of Development Economics, Islamabad, covering rural households from the two largest provinces (Sindh and Punjab) of Pakistan, and it employs all three waves (2001, 2004 and 2010) of the PRHS. This research makes three important contributions to the literature. First, this study uses the Quadratic Almost Ideal Demand System (QUAIDS) to estimate demand functions for eight food groups: cereals, meat, milk and milk products, vegetables, cooking oil, pulses, and other food. The study estimates the food demand functions with Nonlinear Seemingly Unrelated Regression (NLSUR) and employs a Lagrange multiplier test on the coefficient of the squared expenditure term to determine whether that term should be included. Test results support the inclusion of the squared expenditure term in the food demand model for each of the landholding groups (landless, small landowners and large landowners). The study tests for endogeneity and uses a control function for its correction; the problem of observed zero expenditure is dealt with using a two-step procedure. Second, it defines low-price and high-price periods based on the literature review and uses elasticity coefficients from QUAIDS to analyze the static and dynamic welfare effects of food price changes across periods (first- and second-order Taylor approximations of the expenditure function are used). The study estimates the compensating variation (CV), the money-metric loss from food price changes, for landless, small, and large landowners. Third, this study compares the findings on the welfare implications of food price changes based on QUAIDS with earlier research in Pakistan that used other specifications of the demand system. The findings indicate that the dynamic welfare impacts of food price changes are lower than the static welfare impacts for all landholding groups. The static and dynamic welfare impacts of food price changes are highest for the landless. The study suggests that the government should extend social security nets to the landless poor, and particularly to the vulnerable landless (without livestock), to redress the short-term impact of food price increases. In addition, the government should stabilize food prices, and particularly cereal prices, in the long run.
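
For context, a commonly used first- and second-order Taylor approximation of the compensating variation as a share of expenditure x, in terms of budget shares wᵢ and compensated demand responses, is sketched below; this is the standard textbook form, not necessarily the authors' exact expression:

```latex
\frac{CV}{x}\;\approx\;\sum_i w_i\,\Delta\ln p_i\;+\;\frac{1}{2}\sum_i\sum_j \frac{\partial w_i}{\partial \ln p_j}\,\Delta\ln p_i\,\Delta\ln p_j
```

The first term alone is the first-order (static) effect, while the second-order term captures substitution behaviour and is computed from the compensated price responses implied by the estimated QUAIDS elasticities.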

Keywords: QUAIDS, Lagrange multiplier, NLSUR, Taylor approximation

Procedia PDF Downloads 365
175 Occult Haemolacria Paradigm in the Study of Tears

Authors: Yuliya Huseva

Abstract:

The aim was to investigate the contents of tears in order to detect latent blood. Methods: Tear samples from 72 women were studied by microscopy of tears aspirated with a capillary and stained by Nocht, and by a chemical method using test strips with chromogen. Statistical data processing was carried out using the Statistica 10.0 for Windows package, including calculation of Pearson's chi-square test and the Yule association coefficient and determination of sensitivity and specificity. Results: In 30.6% (22) of tear samples, erythrocytes were revealed microscopically. A correlation between the presence of erythrocytes in the tear and the phase of the menstrual cycle was discovered. In the follicular phase of the cycle, erythrocytes were found in 59.1% (13) of subjects, which is significantly more (x²=4.2, p=0.041) than in the luteal phase, 40.9% (9) of women. The predominance of erythrocytes in the tears of the examined women during the first seven days of the follicular phase of the menstrual cycle testifies in favour of vicarious bleeding from the mucous membranes of extragenital organs in sync with menstruation. Of the other cellular elements in tear samples with latent haemolacria, neutrophils prevailed (45.5%, 10), while lymphocytes were less common (27.3%, 6), because neutrophil exudation is accompanied by vasodilatation of the conjunctiva and the release of erythrocytes into the conjunctival cavity. The prognostic significance of the chemical method was found to be 0.53 of that of the microscopic method. In contrast to microscopy, which detected blood in tear samples from 30.6% (22) of women, blood was detected chemically in the tears of 16.7% (12). An association between latent haemolacria and endometriosis was found (k=0.75, p≤0.05). Microscopically, in the tears of patients with endometriosis, erythrocytes were detected in 70% of cases, while in healthy women without endometriosis they were detected in 25% of cases. The proportion of women with erythrocytes in tears determined by the chemical method was 41.7% among patients with endometriosis, which is significantly more (x²=6.5, p=0.011) than the 11.7% among women without endometriosis. The data obtained can be explained by the etiopathogenesis of extragenital endometriosis, which is caused by hematogenous spread of endometrial tissue into the orbit. In endometriosis, erythrocytes are found against a background of accumulations of epithelial cells. In the tear samples of 4 women with endometriosis, glandular cuboidal epithelial cells morphologically similar to endometrial cells were found, which may indicate a generalization of the disease. Conclusions: Single erythrocytes can normally be found in tears; their number depends on the phase of the menstrual cycle, increasing in the follicular phase. Erythrocytes found in tears against a background of accumulations of epitheliocytes with glandular atypia may indicate a manifestation of extragenital endometriosis. Both methods used (microscopic and chemical) are informative in revealing latent haemolacria. The microscopic method is more sensitive, reveals intact erythrocytes and, in addition, provides information about other cells. At the same time, the chemical method is faster and technically simpler, determines the presence of haemoglobin and its metabolic products, and can be used as a screening tool.
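
A minimal sketch of the diagnostic comparison described above (chemical test versus microscopy as the reference method). The 2×2 counts are illustrative values chosen only to be consistent with the reported totals (22 microscopy-positive, 12 chemically positive, n = 72); they are not the actual cross-tabulation:

```python
import numpy as np

#                 microscopy+   microscopy-
table = np.array([[12, 0],      # chemical test positive
                  [10, 50]],    # chemical test negative
                 dtype=float)

tp, fp = table[0]
fn, tn = table[1]
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Pearson chi-square for the 2x2 table (1 degree of freedom)
expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
chi2 = ((table - expected) ** 2 / expected).sum()

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, chi2={chi2:.1f}")
```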

Keywords: tear, blood, microscopy, epitheliocytes

Procedia PDF Downloads 121
174 Evolution and Merging of Double-Diffusive Layers in a Vertically Stable Compositional Field

Authors: Ila Thakur, Atul Srivastava, Shyamprasad Karagadde

Abstract:

The phenomenon of double-diffusive convection is driven by density gradients created by two different components (e.g., temperature and concentration) having different molecular diffusivities. The evolution of horizontal double-diffusive layers (DDLs) is one of the outcomes of double-diffusive convection occurring in a laterally/vertically cooled rectangular cavity with a pre-existing vertically stable composition field. The present work mainly focuses on different characteristics of the formation and merging of double-diffusive layers caused by imposing lateral/vertical thermal gradients on a vertically stable compositional field. A CFD-based two-dimensional Fluent model has been developed for the investigation of the aforesaid phenomena. The configuration containing vertical thermal gradients shows the evolution and merging of DDLs, where elements from the same horizontal plane move vertically and mix with the surroundings, creating a horizontal layer. In the configuration with lateral thermal gradients, a specially oriented convective roll was found inside each DDL, and each roll was driven by the competing density changes due to the pre-existing composition field and the imposed thermal field. When the thermal boundary layer near the vertical wall penetrates the salinity interface, it can disrupt the compositional interface and lead to layer merging. Different analytical scales were quantified and compared for both configurations. Various combinations of solutal and thermal Rayleigh numbers were investigated, yielding three different regimes, namely a stagnant regime, a layered regime, and a unicellular regime. For a particular solutal Rayleigh number, a layered structure can originate only within a range of thermal Rayleigh numbers: lower thermal Rayleigh numbers correspond to a diffusion-dominated stagnant regime, while very high thermal Rayleigh numbers correspond to a unicellular regime with high convective mixing. Different plots identifying these three regimes and the number, thickness, and time of existence of DDLs have been studied and plotted. For a given solutal Rayleigh number, an increase in thermal Rayleigh number increases the width but decreases both the number and the time of existence of DDLs in the fluid domain. Sudden peaks in the velocity and heat transfer coefficient have also been observed and discussed at the time of merging. The present study is expected to be useful in correlating double-diffusive convection in many large-scale applications, including oceanography, metallurgy, geology, etc. The model has also been developed for a three-dimensional geometry, but the results were quite similar to those of the 2-D simulations.
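
For reference, the thermal and solutal Rayleigh numbers that delimit the regimes above are conventionally defined as follows (standard definitions with assumed symbols; the abstract does not give the authors' exact definitions):

```latex
Ra_T=\frac{g\,\beta_T\,\Delta T\,H^{3}}{\nu\,\alpha},\qquad
Ra_S=\frac{g\,\beta_S\,\Delta S\,H^{3}}{\nu\,D_S}
```

Here g is gravitational acceleration, β_T and β_S are the thermal and solutal expansion coefficients, ΔT and ΔS are the imposed temperature and composition differences, H is the cavity height, ν is the kinematic viscosity, α is the thermal diffusivity, and D_S is the solutal diffusivity.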

Keywords: double diffusive layers, natural convection, Rayleigh number, thermal gradients, compositional gradients

Procedia PDF Downloads 85
173 Tribological Behaviour of the Degradation Process of Additive Manufactured Stainless Steel 316L

Authors: Yunhan Zhang, Xiaopeng Li, Zhongxiao Peng

Abstract:

Additive manufacturing (AM) possesses several key characteristics, including high design freedom, energy-efficient manufacturing process, reduced material waste, high resolution of finished products, and excellent performance of finished products. These advantages have garnered widespread attention and fueled rapid development in recent decades. AM has significantly broadened the spectrum of available materials in the manufacturing industry and is gradually replacing some traditionally manufactured parts. Similar to components produced via traditional methods, products manufactured through AM are susceptible to degradation caused by wear during their service life. Given the prevalence of 316L stainless steel (SS) parts and the limited research on the tribological behavior of 316L SS samples or products fabricated using AM technology, this study aims to investigate the degradation process and wear mechanisms of 316L SS disks fabricated using AM technology. The wear mechanisms and tribological performance of these AM-manufactured samples are compared with commercial 316L SS samples made using conventional methods. Additionally, methods to enhance the tribological performance of additive-manufactured SS samples are explored. Four disk samples with a diameter of 75 mm and a thickness of 10 mm are prepared. Two of them (Group A) are prepared from a purchased SS bar using a milling method. The other two disks (Group B), with the same dimensions, are made of Gas Atomized 316L Stainless Steel (size range: 15-45 µm) purchased from Carpenter Additive and produced using Laser Powder Bed Fusion (LPBF). Pin-on-disk tests are conducted on these disks, which have similar surface roughness and hardness levels. Multiple tests are carried out under various operating conditions, including varying loads and/or speeds, and the friction coefficients are measured during these tests. In addition, the evolution of the surface degradation processes is monitored by creating moulds of the wear tracks and quantitatively analyzing the surface morphologies of the mould images. This analysis involves quantifying the depth and width of the wear tracks and analyzing the wear debris generated during the wear processes. The wear mechanisms and wear performance of these two groups of SS samples are compared. The effects of load and speed on the friction coefficient and wear rate are investigated. The ultimate goal is to gain a better understanding of the surface degradation of additive-manufactured SS samples. This knowledge is crucial for enhancing their anti-wear performance and extending their service life.
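
As a rough sketch of the quantities typically extracted from a pin-on-disk test of the kind described above; the values are hypothetical, not measurements from this study:

```python
def friction_coefficient(friction_force_n, normal_load_n):
    """Coefficient of friction from the tangential force at a given load."""
    return friction_force_n / normal_load_n

def specific_wear_rate(worn_volume_mm3, normal_load_n, sliding_distance_m):
    """Archard-type specific wear rate in mm^3 / (N * m)."""
    return worn_volume_mm3 / (normal_load_n * sliding_distance_m)

# Hypothetical test values (illustration only)
mu = friction_coefficient(friction_force_n=4.2, normal_load_n=10.0)
k = specific_wear_rate(worn_volume_mm3=0.35, normal_load_n=10.0, sliding_distance_m=500.0)
print(f"COF = {mu:.2f}, wear rate = {k:.2e} mm^3/(N*m)")
```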

Keywords: degradation process, additive manufacturing, stainless steel, surface features

Procedia PDF Downloads 79
172 Extraction and Electrochemical Behaviors of Au(III) using Phosphonium-Based Ionic Liquids

Authors: Kyohei Yoshino, Masahiko Matsumiya, Yuji Sasaki

Abstract:

Recently, studies have been conducted on Au(III) extraction using ionic liquids (ILs) as extractants or diluents. ILs such as piperidinium, pyrrolidinium, and pyridinium have been studied as extractants for noble metal extraction, and the polarity, hydrophobicity, and solvent miscibility of these ILs can be adjusted depending on the intended use. The unique properties of ILs therefore make them functional extraction media. The extraction mechanism of Au(III) using phosphonium-based ILs and the relevant thermodynamic studies are yet to be reported. In the present work, we focused on the mechanism of Au(III) extraction and the related thermodynamic analyses using phosphonium-based ILs. Triethyl-n-pentyl, triethyl-n-octyl, and triethyl-n-dodecyl phosphonium bis(trifluoromethyl-sulfonyl)amide, [P₂₂₂ₓ][NTf₂] (X = 5, 8, and 12), were investigated for Au(III) extraction. The IL–Au complex was identified as [P₂₂₂₅][AuCl₄] using UV–Vis–NIR and Raman spectroscopic analyses. The extraction behavior of Au(III) was investigated with the [P₂₂₂ₓ][NTf₂] IL concentration varied from 1.0 × 10⁻⁴ to 1.0 × 10⁻¹ mol dm⁻³. The results indicate that Au(III) can be easily extracted by an anion-exchange reaction in the [P₂₂₂ₓ][NTf₂] IL. The slope range of 0.96–1.01 in the plot of log D vs log[P₂₂₂ₓ][NTf₂] indicates the association of one mole of IL with one mole of [AuCl₄⁻] during extraction. Consequently, [P₂₂₂ₓ][NTf₂] is an anion-exchange extractant for the extraction of Au(III) in anionic form from chloride media; thus, this type of phosphonium-based IL proceeds via an anion-exchange reaction with Au(III). In order to evaluate the thermodynamic parameters of Au(III) extraction, the equilibrium constant (log Kₑₓ') was determined from the temperature dependence. A plot of the natural logarithm of Kₑₓ' vs the inverse of the absolute temperature (T⁻¹) yields a slope proportional to the enthalpy (ΔH). By plotting T⁻¹ vs ln Kₑₓ', a line with a slope in the range 1.129–1.421 was obtained. The result thus indicated that the extraction reaction of Au(III) using the [P₂₂₂ₓ][NTf₂] IL (X = 5, 8, and 12) was exothermic (ΔH = -9.39 to -11.81 kJ mol⁻¹). The negative value of TΔS (-4.20 to -5.27 kJ mol⁻¹) indicates that microscopic randomness is preferred in the [P₂₂₂₅][NTf₂] IL extraction system over [P₂₂₂₁₂][NTf₂] IL. The total negative change in Gibbs energy (-5.19 to -6.55 kJ mol⁻¹) for the extraction reaction would thus be relatively influenced by the TΔS term, which depends on the number of carbon atoms in the alkyl side chain, even though ΔH contributes significantly to the total negative change in Gibbs energy. Electrochemical analysis revealed that extracted Au(III) can be reduced in two steps: (i) Au(III)/Au(I) and (ii) Au(I)/Au(0). The diffusion coefficients of the extracted Au(III) species in [P₂₂₂ₓ][NTf₂] (X = 5, 8, and 12) were evaluated from 323 to 373 K using semi-integral and semi-differential analyses. Because of the viscosity of the IL medium, the diffusion coefficient of the extracted Au(III) increases with increasing alkyl chain length. The Au 4f₇/₂ spectrum obtained by X-ray photoelectron spectroscopy revealed that the Au electrodeposits obtained after 10 cycles of continuous extraction and electrodeposition were in the metallic state.
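
A minimal sketch of the van't Hoff treatment described above: a linear fit of ln Kₑₓ' against 1/T gives -ΔH/R as the slope, from which ΔG and TΔS follow at each temperature. The Kₑₓ' values used here are placeholders, not the measured equilibrium constants:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical ln(Kex') values at four temperatures (illustration only)
T = np.array([298.0, 308.0, 318.0, 328.0])      # K
lnK = np.array([2.10, 1.95, 1.82, 1.70])

slope, intercept = np.polyfit(1.0 / T, lnK, 1)  # van't Hoff fit
dH = -R * slope                                  # J mol^-1; negative -> exothermic
dG = -R * T * lnK                                # J mol^-1 at each T
TdS = dH - dG                                    # J mol^-1 at each T

print(round(dH / 1000.0, 2), (dG / 1000.0).round(2), (TdS / 1000.0).round(2))
```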

Keywords: Au(III), electrodeposition, phosphonium-based ionic liquids, solvent extraction

Procedia PDF Downloads 107
171 Exploratory Tests of Crude Bacteriocins from Autochthonous Lactic Acid Bacteria against Food-Borne Pathogens and Spoilage Bacteria

Authors: M. Naimi, M. B. Khaled

Abstract:

The aim of the present work was to test the in vitro inhibition of food pathogens and spoilage bacteria by crude bacteriocins from autochthonous lactic acid bacteria. Thirty autochthonous lactic acid bacteria isolated previously, belonging to the genera Lactobacillus, Carnobacterium, Lactococcus, Vagococcus, Streptococcus, and Pediococcus, were screened by an agar spot test and a well diffusion assay against Gram-positive and Gram-negative harmful bacteria: Bacillus cereus, Bacillus subtilis ATCC 6633, Escherichia coli ATCC 8739, Salmonella typhimurium ATCC 14028, Staphylococcus aureus ATCC 6538, and Pseudomonas aeruginosa, under conditions meant to reduce the effects of lactic acid and hydrogen peroxide, in order to select bacteria with high bacteriocinogenic potential. Furthermore, semi-quantification of the crude bacteriocins and tests of their heat sensitivity at different temperatures (80, 95, 110°C, and 121°C) were performed. Another exploratory test, concerning the response of St. aureus ATCC 6538 to the presence of crude bacteriocins, was also carried out. In the agar spot test, fifteen candidates were active toward Gram-positive target strains. The secondary screening demonstrated an antagonistic activity oriented only against St. aureus ATCC 6538, leading to the selection of five isolates, Lm14, Lm21, Lm23, Lm24, and Lm25, with larger inhibition zones compared to the others. The ANOVA statistical analysis reveals a small variation of repeatability: Lm21: 0.56%, Lm23: 0%, Lm25: 1.67%, Lm14: 1.88%, Lm24: 2.14%. Conversely, slight variation was reported in terms of inhibition diameters: 9.58±0.40, 9.83±0.46, 10.16±0.24, 8.5±0.40, and 10 mm for Lm21, Lm23, Lm25, Lm14, and Lm24, respectively, indicating that the observed potential showed a heterogeneous distribution (BMS = 0.383, WMS = 0.117). The calculated repeatability coefficient was 7.35%. As for the bacteriocin semi-quantification, the five samples exhibited production of about 4.16 AU/ml for Lm21, Lm23, and Lm25, and 2.08 AU/ml for Lm14 and Lm24. Concerning heat sensitivity, the crude bacteriocins were fully insensitive to heat inactivation up to 121°C, preserving the same inhibition diameter. As for the growth kinetics, µmax showed reductions in pathogen load for Lm21, Lm23, Lm25, Lm14, and Lm24 of about 42.92%, 84.12%, 88.55%, 54.95%, and 29.97% in the second trials. Conversely, the growth of this pathogen after five hours displayed differences of 79.45%, 12.64%, 11.82%, 87.88%, and 85.66% in the second trials, compared to the control. This study showed potential inhibition of the growth of this food pathogen, suggesting the possibility of improving the hygienic quality of food.

Keywords: exploratory test, lactic acid bacteria, crude bacteriocins, spoilage, pathogens

Procedia PDF Downloads 213
170 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditure on colorectal cancer (CRC) care; a better understanding of increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models that account for overdispersion or excess zero counts can be used. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that excess zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under a classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and Approximate Maximum Likelihood Estimation (AMLE) can be derived accordingly; this estimator is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect a zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows a standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In the real data analysis, with or without considering measurement error in covariates, existing tests and our proposed test all imply that H0 should be rejected with a P-value less than 0.001, i.e., the zero-inflation effect is very significant and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant, whereas if measurement error is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs between the ZIP regression models with and without measurement error. Conclusion: In our study, the ZIP model, rather than the Poisson model, should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will yield statistically more reliable and precise information.
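
A minimal sketch of the excess-zeros check quoted above. The sample size used here is an assumption back-calculated so that n·e⁻¹·³³ ≈ 156, since n is not stated in the abstract:

```python
import math

n = 590            # assumed sample size (back-calculated, not stated in the abstract)
mean_count = 1.33  # fitted Poisson mean from the abstract
observed_zeros = 206

expected_zeros = n * math.exp(-mean_count)   # ~156, as reported
print(round(expected_zeros), observed_zeros)
# observed >> expected suggests zero inflation, motivating the ZIP score test
```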

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 289
169 Dependence of Densification, Hardness and Wear Behaviors of Ti6Al4V Powders on Sintering Temperature

Authors: Adewale O. Adegbenjo, Elsie Nsiah-Baafi, Mxolisi B. Shongwe, Mercy Ramakokovhu, Peter A. Olubambi

Abstract:

The sintering step in powder metallurgy (P/M) processes is very sensitive, as it determines to a large extent the properties of the final component produced. Spark plasma sintering has been used extensively over the past decade to consolidate a wide range of materials, including metallic alloy powders. This novel, non-conventional sintering method has proven advantageous, offering full densification of materials, high heating rates, low sintering temperatures, and short sintering cycles compared with conventional sintering methods. Ti6Al4V has been adjudged the most widely used α+β alloy due to its impressive mechanical performance in service environments, especially in the aerospace and automobile industries, being a light metal alloy with the capacity for the fuel efficiency needed in these industries. The P/M route has been a promising method for the fabrication of parts made from Ti6Al4V alloy due to its reduction of cost and material loss and its ability to produce near-net and intricate shapes. However, the use of this alloy has been largely limited owing to its relatively poor hardness and wear properties. The effect of sintering temperature on the densification, hardness, and wear behavior of spark plasma sintered Ti6Al4V powders was investigated in the present study. Sintering of the alloy powders was performed in the 650–850°C temperature range at a constant heating rate, applied pressure, and holding time of 100°C/min, 50 MPa, and 5 min, respectively. Density measurements were carried out according to Archimedes’ principle, and microhardness tests were performed on sectioned, as-polished surfaces at a load of 100 gf and a dwell time of 15 s. Dry sliding wear tests were performed at varied sliding loads of 5, 15, 25, and 35 N using a ball-on-disc tribometer configuration with WC as the counterface material. Microstructural characterization of the sintered samples and wear tracks was carried out using SEM and EDX techniques. The density and hardness of the sintered samples increased with increasing sintering temperature; near-full densification (99.6% of the theoretical density) and a Vickers micro-indentation hardness of 360 HV were attained at 850°C. The coefficient of friction (COF) and wear depth improved significantly with increased sintering temperature under all the loading conditions examined, except at 25 N, indicating better mechanical properties at high sintering temperatures. Worn surface analyses showed that the wear mechanism was a synergy of adhesive and abrasive wear, although the former was prevalent.
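
A minimal sketch of the Archimedes density measurement referred to above; the masses below are hypothetical, and 4.43 g/cm³ is the commonly quoted theoretical density of Ti6Al4V (an assumed value, not from the paper):

```python
def relative_density(mass_air_g, mass_water_g, rho_water=0.9978, rho_theoretical=4.43):
    """Relative density (%) of a sample weighed in air and suspended in water."""
    bulk_density = mass_air_g * rho_water / (mass_air_g - mass_water_g)  # g/cm^3
    return 100.0 * bulk_density / rho_theoretical

# Hypothetical weighings (illustration only)
print(round(relative_density(mass_air_g=10.000, mass_water_g=7.739), 1))  # ~99.6 %
```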

Keywords: hardness, powder metallurgy, spark plasma sintering, wear

Procedia PDF Downloads 275
168 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis, yet existing bioinformatics tools have been targeted towards Protein-coding regions alone. There are therefore challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient, sequence-alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function, and a parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity, and the average generalization performance was determined using a benchmark of multi-species organisms. The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) also decreased from 1.446 to 0.842 and then to 0.718 for the first, second, and third iterations, respectively. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC value of 0.97, indicating an improved predictive ability, and identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding Region Identifier model efficiently identified the Protein-coding and Non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
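
A minimal sketch of the logistic-regression core described above (sigmoid activation with gradient ascent on the log-likelihood), shown on simulated six-feature data; this is an illustration of the technique, not the authors' PNRI code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=390):
    """Fit a logistic-regression coefficient vector by gradient ascent."""
    X = np.c_[np.ones(len(X)), X]          # prepend an intercept column
    w = np.zeros(X.shape[1])               # parameter (coefficient) vector
    for _ in range(epochs):
        p = sigmoid(X @ w)                  # predicted class probabilities
        grad = X.T @ (y - p) / len(y)       # gradient of the log-likelihood
        w += lr * grad
    return w

# Simulated six-feature training set with coding=1 / non-coding=0 labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
true_w = np.array([0.5, 1.0, -0.7, 0.3, 0.9, 2.0])
y = (sigmoid(X @ true_w) > 0.5).astype(float)

print(fit_logistic(X, y).round(3))
```

A dynamic threshold on the resulting probabilities (rather than a fixed 0.5 cutoff) would then be chosen, e.g., from the ROC curve, as described above.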

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 68
167 Physical Model Testing of Storm-Driven Wave Impact Loads and Scour at a Beach Seawall

Authors: Sylvain Perrin, Thomas Saillour

Abstract:

The Grande-Motte port and seafront development project on the French Mediterranean coastline entailed evaluating wave impact loads (pressures and forces) on the new beach seawall and comparing the resulting scour potential at the base of the existing and new seawalls. A physical model was built at ARTELIA’s hydraulics laboratory in Grenoble (France) to provide insight into the evolution of scouring over time at the front of the wall, the intensity and distribution of quasi-static and impulsive wave forces on the wall, and the water and sand overtopping discharges over the wall. The beach consisted of fine sand and was approximately 50 m wide above mean sea level (MSL). Seabed slopes were in the range of 0.5% offshore to 1.5% closer to the beach. A smooth concrete structure with an elevated curved crown wall will replace the existing concrete seawall. Prior to the start of breaking (at the -7 m MSL contour), storm-driven maximum spectral significant wave heights of 2.8 m and 3.2 m were estimated for the benchmark historical storm event of 1997 and the 50-year return period storm, respectively, resulting in 1 m high waves at the beach. For the wave load assessment, a tensor scale measured wave forces and moments, and five piezo/piezo-resistive pressure sensors were placed on the wall. Light-weight sediment physical modelling and pressure and force measurements were performed at a scale of 1:18. The polyvinyl chloride light-weight particles used to model the prototype silty sand had a density of approximately 1 400 kg/m³ and a median diameter (d50) of 0.3 mm. Quantitative assessments of the seabed evolution were made using a measuring rod and a laser scan survey. Testing demonstrated the occurrence of numerous impulsive wave impacts on the reflector (22%), induced not by direct wave breaking but mostly by wave run-up slamming on the top curved part of the wall. Wave forces of up to 264 kilonewtons and impulsive pressure spikes of up to 127 kilonewtons were measured. A maximum scour of -0.9 m was measured for the new seawall versus -0.6 m for the existing seawall, which is attributable to increased wave reflection (reflection coefficient of 25.7–30.4% vs 23.4–28.6%). This paper presents a methodology for the setup and operation of a physical model in order to assess the hydrodynamic and morphodynamic processes at a beach seawall during storm events. It discusses the pros and cons of such a methodology versus others, notably regarding structural peculiarities and model effects.
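
For context, the standard Froude-similitude relations governing an undistorted 1:18 physical model of this kind (same fluid in model and prototype assumed; not stated explicitly in the abstract) are sketched below:

```python
# Froude scaling: prototype quantity = model quantity * scale factor below.
length_scale = 18.0                        # prototype / model length ratio

velocity_scale = length_scale ** 0.5       # ~4.24
time_scale = length_scale ** 0.5           # ~4.24
pressure_scale = length_scale              # force per unit area
force_scale = length_scale ** 3            # ~5832

wave_height_model_m = 1.0 / length_scale   # a 1 m prototype wave at the beach
print(velocity_scale, force_scale, round(wave_height_model_m, 3))
```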

Keywords: beach, impacts, scour, seawall, waves

Procedia PDF Downloads 153
166 Curriculum Check in Industrial Design, Based on Knowledge Management in Iran Universities

Authors: Maryam Mostafaee, Hassan Sadeghi Naeini, Sara Mostowfi

Abstract:

Today, knowledge management (KM) plays an important role in organizations. Basically, knowledge management is about using the knowledge held by an organization’s workforce to the fullest in order to advance the goals and demands of that organization. The purpose of knowledge management is not only to manage existing documentation, information, and data throughout an organization; the most important part of KM is to control the most important and key factors within that information and data. It is to deliver the information employees need, at the time they need it, from genuine sources, so that they produce the best performance and results; in this way, the performance of the organization is maximized. Many definitions of the objective of management have been released. Management is the science of applying accurate knowledge, repeatedly, to the organization in order to shape it and take full advantage of it for reaching the organization’s goals and targets, to be used by employees and users. The definition of knowledge based on the Kalinz dictionary is: facts, emotions, or experiences known by a person or a group of people. Based on the Merriam-Webster dictionary, management is: the act or skill of controlling and making decisions about a business, department, sport team, etc. Based on the Oxford dictionary, it is: efficient handling of information and resources within a commercial organization; and industrial design is: the art or process of designing manufactured products (“the scale is a beautiful work of industrial design”). When knowledge management is performed effectively in universities, the discovery and creation of new knowledge are facilitated, and procedures for knowledge exchange between different units are established. College officials and employees understand the importance of knowledge for the university’s success and will make more effort to prevent errors. In this strategy, the affecting factors and trends, and how they are managed in the university, are explored. In this research, Iranian universities were analyzed with respect to their use of knowledge management, how they behave, and how well they understand the following: 1. the discovery of knowledge management in Iranian universities; 2. the transfer of existing knowledge between faculties and units; 3. the participation of employees in acquiring, using, and transferring knowledge; 4. the accessibility of valid sources; 5. research on factors and correct processes in the university. Some examples that we have already analyzed include: enabling better and faster decision-making; making it easy to find relevant information and resources; reusing ideas, documents, and expertise; and avoiding redundant effort. Conclusion: It was found that the effectiveness of knowledge management in the industrial design field is low. Based on checklists completed by education officials and professors in universities and the calculated coefficient of effectiveness, knowledge management has not yet found its proper place.

Keywords: knowledge management, industrial design, educational curriculum, learning performance

Procedia PDF Downloads 371
165 Vulnerability Assessment of Groundwater Quality Deterioration Using PMWIN Model

Authors: A. Shakoor, M. Arshad

Abstract:

The utilization of groundwater resources for irrigation has increased significantly during the last two decades due to constrained canal water supplies. More than 70% of the farmers in Punjab, Pakistan, depend directly or indirectly on groundwater to meet their crop water demands, and this unchecked paradigm shift has resulted in aquifer depletion and deterioration. Therefore, comprehensive research was carried out in central Punjab, Pakistan, regarding the spatiotemporal variation in groundwater level and quality. Processing MODFLOW for Windows (PMWIN) and MT3D (a solute transport model) were used for existing conditions and for future prediction of groundwater level and quality up to 2030. A comprehensive data set of aquifer lithology, canal network, groundwater level, groundwater salinity, evapotranspiration, groundwater abstraction, recharge, etc. was used in developing the PMWIN model. The model was successfully calibrated and validated with respect to groundwater level for the periods 2003 to 2007 and 2008 to 2012, respectively. The coefficient of determination (R²) and model efficiency (MEF) for the calibration and validation periods were calculated as 0.89 and 0.98, respectively, which indicated a high level of correlation between the calculated and measured data. For the solute transport model (MT3D), values of the advection and dispersion parameters were used. The model was run for a future scenario up to 2030, assuming no substantial change in climate and a gradually increasing groundwater abstraction rate. The model predicted that the groundwater level would decline by 0.0131 to 1.68 m/year during 2013 to 2030, with the maximum decline on the lower side of the study area, where the canal system infrastructure is sparse. This lowering of the groundwater level might cause an increase in tubewell installation and pumping costs. Similarly, the predicted total dissolved solids (TDS) of the groundwater would increase by 6.88 to 69.88 mg/L/year during 2013 to 2030, with the maximum increase on the lower side. It was found that by 2030, good-quality water would be reduced by 21.4%, while marginal- and hazardous-quality water would increase by 19.28% and 2%, respectively. The simulated results showed that the salinity of the study area had increased due to the intrusion of salts. The deterioration of groundwater quality would cause soil salinity and ultimately a reduction in crop productivity. It was concluded from the predicted results of the groundwater model that groundwater deteriorated with the depth of the water table, i.e., TDS increased with declining groundwater level. It is recommended that agronomic and engineering practices, i.e., land leveling, rainwater harvesting, skimming wells, ASR (Aquifer Storage and Recovery) wells, etc., should be integrated to ameliorate the management of groundwater for higher crop production in salt-affected soils.
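
A minimal sketch of the calibration statistics quoted above, i.e., the coefficient of determination (R²) and the Nash-Sutcliffe model efficiency (MEF), computed on hypothetical observed and simulated water levels (not the study data):

```python
import numpy as np

def r_squared(observed, simulated):
    o, s = np.asarray(observed, float), np.asarray(simulated, float)
    return np.corrcoef(o, s)[0, 1] ** 2

def nash_sutcliffe(observed, simulated):
    o, s = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

# Hypothetical water-table depths (m) at a handful of observation wells
obs = [4.2, 4.5, 5.1, 5.6, 6.0, 6.3]
sim = [4.3, 4.4, 5.0, 5.7, 6.1, 6.2]
print(round(r_squared(obs, sim), 2), round(nash_sutcliffe(obs, sim), 2))
```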

Keywords: groundwater quality, groundwater management, PMWIN, MT3D model

Procedia PDF Downloads 378
164 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model

Authors: Mohammad Zamani, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at the time of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a circular vertical spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for velocity-pressure coupling. The most commonly used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall function, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the Morning-Glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical. Furthermore, the standard wall function produced better results compared to the non-equilibrium wall function; thus, for the other simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results. Also, the standard wall function was chosen for the wall treatment, and the standard k-ε turbulence model gave results most consistent with the experimental data. As the jet gets closer to the end of the basin, the difference between the numerical and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a circular vertical spillway. There was good agreement between the numerical and experimental results in the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the numerical results are in good agreement with the experimental ones, but as the water level increases, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.

Keywords: circular vertical, spillway, numerical model, boundary conditions

Procedia PDF Downloads 86
163 Mental Balance, Emotional Balance, and Stress Management: The Role of Ancient Vedic Philosophy from India

Authors: Emily Schulz

Abstract:

The ancient Vedic culture from India had traditions that supported all aspects of health, including psychological health, and are relevant in the current era. These traditions have been compiled by Professor Dr. Purna, a rare Himalayan Master, into the Purna Health Management System (PHMS). The PHMS is a unique, holistic, and integrated approach to health management. It is comprised of four key factors: Health, Fitness, and Nutrition (HF&N), Life Balance (Stress Management) (LB-SM), Spiritual Growth and Development (SG&D); and Living in Harmony with the Natural Environment (LHWNE). The purpose of the PHMS is to give people the tools to take responsibility for managing their own holistic health and wellbeing. A study using a cross-sectional mixed-methods anonymous online survey was conducted during 2017-2018. Adult students of Professor Dr. Purna were invited to participate through announcements made at various events He held throughout the globe. Follow-up emails were sent with consenting language for interested parties and provided them with a link to the survey. Participation in the study was completely voluntary and no incentives were given to respond to the survey. The overall aim of the study was to investigate the effectiveness of implementation of the PHMS on practitioners' emotional balance. However, given the holistic nature of the PHMS, survey questions also inquired about participants’ physical health, stress level, ability to manage stress, and wellbeing using Likert scales. The survey also included some open-ended questions to gain an understanding of the participants’ experiences with the PHMS relative to their emotional balance. In total, 52 people out of 253 potential respondents participated in the study. Data were analyzed using nonparametric Spearman’s Rho correlation coefficient (rs) since the data were not on a normal distribution. Statistical significance was set at p < .05. Results of the study suggested that there are moderate to strong statistically significant relationships (p < .001) between participants' frequent implementation of each of the four key factors of the PHMS and self-reported mental/emotional health (HF&N rs = 0.42; LB-SM rs = 0.54; SG&D rs = 0.49; LHWNE rs = 0.45) Results also demonstrated statistically significant relationships (p < .001) between participants' frequent implementation of each of the four key factors of the PHMS and their self-reported ability to manage stress (HF&N rs = 0.44; LB-SM rs = 0.55; SG&D rs = 0.39; LHWNE rs = 0.55). Additionally, those who reported experiencing better physical health also reported better mental/emotional health (rs = 0.49, p < .001) and better ability to manage stress (rs = 0.46, p < .001). The findings of this study suggest that wisdom from the ancient Vedic culture may be useful for those working in the field of psychology and related fields who would like to assist clients in calming their mind and emotions and managing their stress levels.

Keywords: balanced emotions, balanced mind, stress management, Vedic philosophy

Procedia PDF Downloads 123
162 The Effect of Paper-Based Concept Mapping on Students' Academic Achievement and Attitude in Science Education

Authors: Orhan Akınoğlu, Arif Çömek, Ersin Elmacı, Tuğba Gündoğdu

Abstract:

The concept map is known to be a powerful tool for organizing the ideas and concepts of an individual's mind. This tool is a kind of visual map that illustrates the relationships between the concepts of a certain subject. The effect of concept mapping on cognitive and affective qualities has been one of the research topics among educational researchers for the last decades. Educators want to utilize it both as an instructional tool and as an assessment tool in classes. For that reason, this study aimed to determine the effect of concept mapping as a learning strategy in science classes on students' academic achievement and attitude. The research employed a randomized pre-test post-test control group design. Data were collected from 60 sixth-grade students at a randomly selected primary school in Turkey. Sixth-grade classes of the school were analyzed according to students' academic achievement, science attitude, gender, mathematics and science course grades, and their GPAs before the implementation. Two of the classes were found to be equivalent (t = 0.983, p > 0.05), and one of them was randomly assigned as the experimental group and the other as the control group. During a 5-week period, the experimental group students (N = 30) used the paper-based concept mapping method, while the control group students (N = 30) were taught with the traditional approach according to the science and technology education curriculum for the light and sound unit. Both groups were taught by the same teacher, who is experienced in using concept mapping in science classes. Before the implementation, the teacher explained the theory of concept maps to the experimental group students and showed them, for two hours, how to create paper-based concept maps individually. For the following two hours, she asked them to create concept maps related to their former science subjects and gave them feedback by reviewing the maps, to be sure that they could create them during the implementation. The data were collected with a science achievement test, a science attitude scale, and a personal information form. The science achievement test and science attitude scale were administered as pre-test and post-test, while the personal information form was administered only once. The reliability coefficient of the achievement test was KR-20 = 0.76, and Cronbach's alpha of the attitude scale was 0.89. SPSS statistical software was used to analyze the data. According to the results, there was a statistically significant difference between the experimental and control groups for academic achievement but not for attitude. The experimental group had significantly greater gains on the academic achievement test than the control group (t = 0.02, p < 0.05). The findings showed that paper-and-pencil concept mapping can be used as an effective method for improving students' academic achievement in science classes. The results have implications for further research.
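
For readers unfamiliar with the reliability statistic reported above, the sketch below shows how Cronbach's alpha can be computed for a Likert-type attitude scale; the response matrix is invented and is not the authors' data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of six students to a four-item attitude scale.
responses = np.array([
    [4, 5, 4, 3], [3, 4, 3, 3], [5, 5, 4, 5],
    [2, 3, 2, 2], [4, 4, 5, 4], [3, 3, 3, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```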

Keywords: concept mapping, science education, constructivism, academic achievement, science attitude

Procedia PDF Downloads 410
161 Outcome-Based Education as Mediator of the Effect of Blended Learning on the Student Performance in Statistics

Authors: Restituto I. Rodelas

Abstract:

Higher education has adopted outcomes-based education from K-12. In this approach, the teacher uses any teaching and learning strategies that enable the students to achieve the learning outcomes. The students may be required to exert more effort and figure things out on their own. Hence, outcomes-based students are assumed to be more responsible and more capable of applying the knowledge learned. Another approach that higher education in the Philippines is starting to adopt from other countries is blended learning. This combination of classroom and fully online instruction and learning is expected to be more effective. Participating in the online sessions, however, is entirely up to the students. Thus, the effect of blended learning on the performance of students in Statistics may be mediated by outcomes-based education. If there is a significant positive mediating effect, then blended learning can be optimized by integrating outcomes-based education. In this study, the sample will consist of four blended learning Statistics classes at Jose Rizal University in the second semester of AY 2015–2016. Two of these classes will be assigned randomly to the experimental group that will be handled using outcomes-based education. The two classes in the control group will be handled using the traditional lecture approach. Prior to the discussion of the first topic, a pre-test will be administered. The same test will be given as a post-test after the last topic is covered. In order to establish equality of the groups' initial knowledge, a single-factor ANOVA of the pre-test scores will be performed. A single-factor ANOVA of the post-test minus pre-test score differences will also be conducted to compare the performance of the experimental and control groups. When a significant difference is obtained in either of these ANOVAs, post hoc analysis will be done using Tukey's honestly significant difference (HSD) test. The mediating effect will be evaluated using correlation and regression analyses. The groups' initial knowledge is considered equal when the result of the pre-test ANOVA is not significant. If the result of the score-difference ANOVA is significant and the post hoc test indicates that the classes in the experimental group have significantly different scores from those in the control group, then outcomes-based education has a positive effect. Let blended learning be the independent variable (IV), outcomes-based education the mediating variable (MV), and score difference the dependent variable (DV). There is a mediating effect when the following requirements are satisfied: a significant correlation of IV with DV, a significant correlation of IV with MV, a significant relationship of MV with DV when both IV and MV are predictors in a regression model, and an absolute value of the coefficient of IV as sole predictor that is larger than when both IV and MV are predictors. With a positive mediating effect of outcomes-based education on the effect of blended learning on student performance, it will be recommended to integrate outcomes-based education into blended learning. This will yield the best learning results.
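
As a hedged sketch of the mediation logic outlined above (not the study's procedure), the regressions below follow the classic three-step approach with simulated data: IV on DV, IV on MV, and IV plus MV on DV, with the shrinkage of the IV coefficient indicating mediation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
iv = rng.integers(0, 2, 120).astype(float)          # blended-learning indicator (simulated)
mv = 0.6 * iv + rng.normal(0, 1, 120)               # outcomes-based education measure (simulated)
dv = 0.5 * mv + 0.2 * iv + rng.normal(0, 1, 120)    # post-test minus pre-test difference (simulated)

def fit(y, X):
    return sm.OLS(y, sm.add_constant(X)).fit()

step1 = fit(dv, iv)                                  # IV -> DV
step2 = fit(mv, iv)                                  # IV -> MV
step3 = fit(dv, np.column_stack([iv, mv]))           # IV + MV -> DV

# Mediation is indicated when step1 and step2 are significant, MV is significant
# in step3, and |coefficient of IV| shrinks from step1 to step3.
print(f"IV coefficient alone: {step1.params[1]:.3f}, with MV included: {step3.params[1]:.3f}")
```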

Keywords: outcome-based teaching, blended learning, face-to-face, student-centered

Procedia PDF Downloads 291
160 Improved Functions for Runoff Coefficients and Smart Design of Ditches & Biofilters for Effective Flow Detention

Authors: Thomas Larm, Anna Wahlsten

Abstract:

An international literature study has been carried out to compare commonly used methods for the dimensioning of transport systems and stormwater facilities for flow detention. The focus of the literature study regarding the calculation of design flow and detention has been the widely used Rational Method and its underlying parameters. The impact of chosen design parameters such as return time, rain intensity, runoff coefficient, and climate factor has been studied. The parameters used in the calculations have been analyzed regarding how they can be calculated and within what limits they can be used. Data used in different countries have been specified, e.g., recommended rainfall return times, estimated runoff times, and climate factors used for different cases and time periods. The literature study concluded that the runoff coefficient is the most uncertain parameter and the one that affects the calculated flow and required detention volume the most. Proposals have been developed for new runoff coefficients, including a new proposed method with equations for calculating runoff coefficients as functions of return time (years) and rain intensity (l/s/ha), respectively. It is suggested, contrary to what many design manuals recommend, that the use of the Rational Method need not be limited to a specific catchment size. The proposed relationships between return time or rain intensity and runoff coefficients need further investigation, including quantification of their uncertainties. Examples of parameters that have not yet been considered are the influence on the runoff coefficients of different design rain durations and of the degree of water saturation of green areas, which will be investigated further. The influence of climate effects and design rain on the dimensioning of the stormwater facilities grassed ditches and biofilters (bioretention systems) has been studied, focusing on flow detention capacity. We have investigated how the calculated runoff coefficients, accounting for the climate effect and the influence of increased return time, affect the inflow to and dimensioning of the stormwater facilities. We have developed a smart design of ditches and biofilters that provides both high treatment and high flow detention effects and compared these with the effects of dry and wet ponds. Previous studies of biofilters have generally focused on the treatment of pollutants; their effect on flow volume, and how their flow detention capability can be improved, has rarely been studied. For both the new type of stormwater ditch and the biofilter, their performance must be simulated in a model under larger design rains and a future climate, as these conditions cannot be tested in the field. The stormwater model StormTac Web has been used in case studies. The results showed that the new smart design of ditches and biofilters had flow detention capacity similar to that of dry and wet ponds for the same facility area.
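
The sketch below is a simple illustration of the Rational Method referred to above; the runoff coefficient as a function of return time and rain intensity is the authors' proposal and is represented here only by a plain input value, and the climate factor and all numbers are assumed for illustration.

```python
def design_flow(C, i, A, climate_factor=1.25):
    """Peak flow Q (l/s) from runoff coefficient C (-), rain intensity i (l/s/ha),
    catchment area A (ha), and a climate factor (-); all inputs are illustrative."""
    return C * i * A * climate_factor

# Example: C = 0.8, i = 150 l/s/ha for the chosen return time and duration, A = 2 ha.
Q = design_flow(C=0.8, i=150.0, A=2.0)
print(f"Design flow: {Q:.0f} l/s")
```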

Keywords: runoff coefficients, flow detention, smart design, biofilter, ditch

Procedia PDF Downloads 88
159 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with artificial neural networks (ANN) was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model, which was used to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter, Root Mean Squared Error (RMSE), used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known gas geothermometers developed previously, was statistically evaluated using an external database to avoid a bias problem. The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
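
As an illustrative sketch only, the snippet below regresses bottomhole temperature on log-transformed gas concentrations or ratios with a small feed-forward network and reports the RMSE used as the performance metric above. The authors used a Levenberg-Marquardt training algorithm; scikit-learn's MLPRegressor, used here for brevity, relies on different solvers, so this is a structural analogue with synthetic data, not the published geothermometers.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))            # e.g. ln(CO2/H2) and ln(H2S/H2), synthetic values
bht = 250 + 30 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 5, 200)   # synthetic BHT (deg C)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, bht)
rmse = mean_squared_error(bht, model.predict(X)) ** 0.5
print(f"Training RMSE: {rmse:.1f} deg C")
```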

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 354
158 The Impacts of Export in Stimulating Economic Growth in Ethiopia: ARDL Model Analysis

Authors: Natnael Debalklie Teshome

Abstract:

The purpose of the study was to empirically investigate the impacts of export performance and its volatility on economic growth in the Ethiopian economy. To do so, time-series data for the sample period 1974/75–2017/18 were collected from the databases and annual reports of the IMF, WB, NBE, MoFED, UNCTAD, and EEA. The extended Cobb-Douglas production function of the neoclassical growth model, framed under endogenous growth theory, was used to consider both the performance and instability aspects of exports. First, unit root tests were conducted using the ADF and PP tests, and the data were found to be stationary with a mix of I(0) and I(1). Then, the bounds test and Wald test were employed, and the results showed that long-run co-integration exists among the study variables. All the diagnostic test results also reveal that the model fulfills the criteria of a well-fitted model. Therefore, the ARDL model and VECM were applied to estimate the long-run and short-run parameters, while the Granger causality test was used to test the causality between the study variables. The empirical findings of the study reveal that only exports and the coefficient of variation had significant positive and negative impacts on RGDP in the long run, respectively, while the other variables were found to have an insignificant impact on the economic growth of Ethiopia. In the short run, except for gross capital formation and the coefficient of variation, which have a highly significant positive impact, all other variables have a strongly significant negative impact on RGDP. This shows that exports had a strong, significant impact in both the short-run and long-run periods; however, their positive and statistically significant impact is observed only in the long run. Similarly, there was highly significant export fluctuation in both periods, while significant commodity concentration (CCI) was observed only in the short run. Moreover, the Granger causality test reveals that unidirectional causality running from export performance to RGDP exists in the long run and from both exports and RGDP to CCI in the short run. Therefore, the export-led growth strategy should be sustained and strengthened. In addition, boosting the industrial sector is vital to bring about structural transformation. Hence, the government has to provide different incentive schemes and supportive measures to exporters to extract the spillover effects of exports. Greater emphasis should also be given to price-oriented diversification and specialization in major primary products in which the country has a comparative advantage, to reduce value-based instability in the export earnings of the country. The government should also strive to increase capital formation and human capital development by enhancing investments in technology and the quality of education to accelerate the economic growth of the country.
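
A hedged sketch of two building blocks of the workflow described above, the ADF unit-root test and the Granger causality test, is shown below with statsmodels; the series are simulated, not the Ethiopian data used in the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(2)
exports = np.cumsum(rng.normal(0.5, 1.0, 44))            # simulated I(1)-like series
rgdp = 0.4 * np.roll(exports, 1) + rng.normal(0, 1, 44)  # simulated growth series

# ADF test: a large p-value suggests a unit root, so the series would enter as I(1).
adf_stat, p_value, *_ = adfuller(exports)
print(f"ADF statistic = {adf_stat:.2f}, p = {p_value:.3f}")

# Granger causality: does the second column (exports) help predict the first (rgdp)?
data = pd.DataFrame({"rgdp": rgdp, "exports": exports})
grangercausalitytests(data[["rgdp", "exports"]], maxlag=2)
```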

Keywords: export, economic growth, export diversification, instability, co-integration, granger causality, Ethiopian economy

Procedia PDF Downloads 79
157 Evaluation of Cooperative Hand Movement Capacity in Stroke Patients Using the Cooperative Activity Stroke Assessment

Authors: F. A. Thomas, M. Schrafl-Altermatt, R. Treier, S. Kaufmann

Abstract:

Stroke is the main cause of adult disability, and upper limb function in particular is affected in most patients. Recently, cooperative hand movements have been shown to be a promising type of upper limb training in stroke rehabilitation. In these movements, which are frequently found in activities of daily living (e.g., opening a bottle, winding up a blind), the force of one upper limb has to be equally counteracted by the other limb to successfully accomplish a task. The use of standardized and reliable clinical assessments is essential to evaluate the efficacy of therapy and the functional outcome of a patient. Many assessments for upper limb function or impairment are available; however, the evaluation of cooperative hand movement tasks is rarely included in them. Thus, the aim of this study was (i) to develop a novel clinical assessment (CASA - Cooperative Activity Stroke Assessment) for the evaluation of patients' capacity to perform cooperative hand movements and (ii) to test its intra- and interrater reliability. Furthermore, CASA scores were compared to the current gold-standard assessments for the upper extremity in stroke patients (i.e., the Fugl-Meyer Assessment and the Box & Blocks Test). The CASA consists of five cooperative activities of daily living: (1) opening a jar, (2) opening a bottle, (3) opening and closing a zip, (4) unscrewing a nut, and (5) opening a clip box. The goal is to accomplish the tasks as fast as possible. In addition to the quantitative rating (i.e., time), which is converted to a 7-point scale, the quality of the movement is also rated on a 4-point scale. To test the reliability of the CASA, fifteen stroke subjects were tested twice within a week by the same two raters. Intra- and interrater reliability was calculated using the intraclass correlation coefficient (ICC) for the total CASA score and single items. Furthermore, Pearson correlation was used to compare the CASA scores to the scores of the Fugl-Meyer upper limb assessment and the Box & Blocks test, which were administered to every patient in addition to the CASA. ICC scores indicated excellent intra- and interrater reliability for the total CASA score and good to excellent reliability for single items. Furthermore, the CASA score was significantly correlated with the Fugl-Meyer and Box & Blocks scores. The CASA provides a reliable assessment of cooperative hand movements, which are crucial for many activities of daily living. Due to its low-cost setup and easy, fast implementation, we suggest that it is well suited for clinical application. In conclusion, the CASA is a useful tool for assessing the functional status and therapy-related recovery of cooperative hand movement capacity in stroke patients.
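
A minimal sketch, assuming the third-party pingouin package, of how an intraclass correlation coefficient of the kind used for the CASA reliability analysis can be computed; the subjects, raters, and scores below are invented, not the study's data.

```python
import pandas as pd
import pingouin as pg

# Hypothetical total CASA scores for four subjects rated by two raters.
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater":   ["A", "B"] * 4,
    "score":   [28, 27, 15, 17, 22, 21, 30, 29],
})

icc = pg.intraclass_corr(data=df, targets="subject", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```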

Keywords: activities of daily living, clinical assessment, cooperative hand movements, reliability, stroke

Procedia PDF Downloads 320
156 Wrist Pain, Technological Device Used, and Perceived Academic Performance Among the College of Computer Studies Students

Authors: Maquiling Jhuvie Jane R., Ojastro Regine B., Peroja Loreille Marie B., Pinili Joy Angela, Salve Genial Gail M., Villavicencio Marielle Irene B., Yap Alther Francis Garth B.

Abstract:

Introduction: This study investigated the impact of prolonged device usage on wrist pain and perceived academic performance among college students in Computer Studies. The research aims to explore the correlation between the frequency of technological device use and the incidence of wrist pain, as well as how this pain affects students' academic performance. The study seeks to provide insights that could inform interventions promoting better musculoskeletal health among students engaged in intensive technology use, in order to further improve their academic performance. Method: The study utilized a descriptive-correlational and comparative design, focusing on bona fide students of Silliman University's College of Computer Studies during the second semester of 2023-2024. Participants were recruited through a survey sent via school email, with responses collected until March 30, 2024. Data were gathered using a password-protected device and Google Forms, ensuring restricted access to the raw data. The demographic profile was summarized, and the prevalence of wrist pain and device usage were analyzed using percentages and weighted means. Statistical analyses included Spearman's rank correlation coefficient to assess the relationship between wrist pain and device usage, and an independent t-test to evaluate differences in academic performance based on the presence of wrist pain. Alpha was set at 0.05. Results: The study revealed that 40% of College of Computer Studies students (2 out of every 5) experience wrist pain. Laptops and desktops were the most frequently used devices for academic work, achieving a weighted mean of 4.511, while mobile phones and tablets received lower means of 4.183 and 1.911, respectively. The average academic performance score among students was 29.7, classified as 'Good Performance.' Notably, there was no significant relationship between the frequency of device usage and wrist pain, as indicated by p-values exceeding 0.05. However, a significant difference in perceived academic performance was observed: students without wrist pain scored an average of 30.39, compared to 28.72 for those with wrist pain, with a p-value of 0.0134 confirming this distinction. Conclusion: The study revealed that about 40% of students in the College of Computer Studies experience wrist pain, but there is no significant link between device usage and pain occurrence. However, students without wrist pain demonstrated better academic performance than those with pain, suggesting that wrist health may impact academic success. These findings imply that physical therapy practices in the Philippines should focus on preventive strategies and ergonomic education to improve student health and performance.
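
An illustrative sketch of the group comparison used above, the independent t-test between students with and without wrist pain, is shown below; the performance scores are invented, not the study's raw data.

```python
from scipy.stats import ttest_ind

# Hypothetical perceived academic performance scores for the two groups.
no_pain   = [32, 31, 29, 30, 33, 28, 31, 30]
with_pain = [28, 27, 30, 26, 29, 28, 30, 27]

t, p = ttest_ind(no_pain, with_pain)
print(f"t = {t:.2f}, p = {p:.4f}")   # p < 0.05 indicates a significant difference
```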

Keywords: wrist pain, frequency of use of technological devices, perceived academic performance, physical therapy

Procedia PDF Downloads 16
155 Eggshell Waste Bioprocessing for Sustainable Acid Phosphatase Production and Minimizing Environmental Hazards

Authors: Soad Abubakr Abdelgalil, Gaber Attia Abo-Zaid, Mohamed Mohamed Yousri Kaddah

Abstract:

Background: The Environmental Protection Agency has listed eggshell waste as the 15th most significant food industry pollution hazard. The utilization of eggshell waste as a source of renewable energy has been a hot topic in recent years. Therefore, the target of the current investigation was to find a sustainable solution for the recycling and valorization of eggshell waste by investigating its potential to produce acid phosphatase (ACP) and organic acids with the newly discovered B. sonorensis. Results: The most potent ACP-producing strain, B. sonorensis ACP2, was identified as a local bacterial strain obtained from the effluent of paper and pulp industries on the basis of molecular and morphological characterization. The use of the consecutive statistical experimental approaches of Plackett-Burman Design (PBD) and Orthogonal Central Composite Design (OCCD), followed by pH-uncontrolled cultivation conditions in a 7 L bench-top bioreactor, revealed an innovative medium formulation that substantially improved ACP production, reaching 216 U L⁻¹ with an ACP yield coefficient Yp/x of 18.2 and a specific growth rate (µ) of 0.1 h⁻¹. The metals Ag+, Sn+, and Cr+ were the most efficiently released from eggshells during the solubilization process by B. sonorensis. The uncontrolled-pH culture condition is the setting best suited to improving ACP and organic acid production simultaneously. Quantitative and qualitative analyses of the produced organic acids were carried out using liquid chromatography-tandem mass spectrometry (LC-MS/MS). Lactic acid, citric acid, and a hydroxybenzoic acid isomer were the most common organic acids produced throughout the cultivation process. The findings of thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), scanning electron microscopy (SEM), energy-dispersive spectroscopy (EDS), Fourier-transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD) analysis emphasize the significant influence of organic acids and ACP activity on the solubilization of eggshell particles. Conclusions: This study emphasized robust microbial engineering approaches for the large-scale production of a newly discovered acid phosphatase accompanied by organic acid production from B. sonorensis. The biovalorization of eggshell waste and the production of cost-effective ACP and organic acids were integrated in the current study through the implementation of a unique and innovative medium formulation design for eggshell waste management, as well as the scaling up of ACP production to a bench-top scale.
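
A hedged sketch of how the two fermentation parameters reported above, the specific growth rate (µ) and the product yield coefficient Yp/x, can be estimated from batch data is given below; the time, biomass, and activity values are invented for illustration only.

```python
import numpy as np

t = np.array([0, 2, 4, 6, 8, 10])                            # cultivation time (h), illustrative
biomass = np.array([0.5, 1.0, 2.1, 4.3, 8.0, 12.0])          # g L^-1, illustrative
acp = np.array([0, 20, 50, 100, 160, 216])                   # ACP activity (U L^-1), illustrative

# mu is the slope of ln(biomass) vs time during exponential growth.
mu, _ = np.polyfit(t, np.log(biomass), 1)
# Yp/x is product formed per unit biomass formed over the batch.
yield_px = (acp[-1] - acp[0]) / (biomass[-1] - biomass[0])

print(f"mu = {mu:.2f} h^-1, Yp/x = {yield_px:.1f} U per g biomass")
```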

Keywords: chicken eggshells waste, bioremediation, statistical experimental design, batch fermentation

Procedia PDF Downloads 376
154 Peculiarities of Snow Cover in Belarus

Authors: Aleh Meshyk, Anastasiya Vouchak

Abstract:

On average, snow covers Belarus for 75 days in the south-west and 125 days in the north-east. During the cold season the snowpack is often destroyed by thaws, especially at the beginning and end of winter. Over 50% of thawing days have a positive mean daily temperature, which results in complete snow melting. For instance, in December 10% of thaws occur at a mean daily temperature of 4 °C. A stable snowpack lying for over a month forms in the north-east in the first ten days of December but in the south-west in the last ten days of December. The cover disappears in March: in the north-east in the last ten days of the month, in the south-west in the first ten days. This research takes into account that precipitation falling during the cold season can be not only liquid or solid but also of a mixed type (about 10-15% a year). Another important feature of snow cover is its density. In Belarus, the density of freshly fallen snow ranges from 0.08-0.12 g/cm³ in the north-east to 0.12-0.17 g/cm³ in the south-west. Over time, snow settles under its own weight and after melting and refreezing. The average density of snow at the end of January is 0.23-0.28 g/cm³, in February 0.25-0.30 g/cm³, and in March 0.29-0.36 g/cm³. It can sometimes exceed 0.50 g/cm³ if the snow melts too fast, and the density of melting snow saturated with water can reach 0.80 g/cm³. The average maximum snow depth is 15-33 cm: the minimum is in Brest, the maximum in Lyntupy. The maximum registered snow depth ranges within 40-72 cm. The water content of the snowpack, as well as its depth and density, reaches its maximum in the second half of February to the beginning of March. The spatial distribution of the amount of liquid in snow corresponds to the trend described above, i.e., it increases from south-west to north-east and on the highlands. The average annual value of the maximum water content in snow ranges from 35 mm in the south-west to 80-100 mm in the north-east. The water content in snow is over 80 mm on the central Belarusian highland. In certain years it exceeds the average annual values by a factor of 2-3. Moderate water content in snow (80-95 mm) is characteristic of the western highlands. The maximum water content in snow varies over the country from 107 mm (Brest) to 207 mm (Novogrudok). The maximum water content in snow also varies significantly from year to year, which is confirmed by the high coefficient of variation (Cv). Maxima (0.62-0.69) occur in the south and south-west of Belarus; minima (0.42-0.46) occur in central and north-eastern Belarus, where the snow cover is more stable. Since 1987 most gauge stations in Belarus have observed a trend towards a decrease in the water content of snow, which is confirmed by this research. The deepest snow cover forms on the highlands in central and north-eastern Belarus. The Novogrudok, Minsk, Volkovysk, and Sventayny highlands are a natural orographic barrier which prevents snow-bringing air masses from penetrating into the interior of the country. The research is based on data from gauge stations in Belarus registered from 1944 to 2014.
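
The short sketch below illustrates the two quantities discussed above: the water content of the snowpack (snow water equivalent, i.e., depth times density) and the coefficient of variation Cv of yearly maxima. The depth, density, and annual maxima are illustrative values, not station records.

```python
import numpy as np

depth_cm = 30.0
density = 0.30                        # g/cm^3
swe_mm = depth_cm * 10 * density      # 1 cm of water column = 10 mm, so SWE in mm
print(f"Snow water equivalent = {swe_mm:.0f} mm")

annual_max_swe = np.array([60, 95, 40, 120, 80, 55, 100, 70])   # mm, invented yearly maxima
cv = annual_max_swe.std(ddof=1) / annual_max_swe.mean()
print(f"Coefficient of variation Cv = {cv:.2f}")
```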

Keywords: density, depth, snow, water content in snow

Procedia PDF Downloads 161
153 Smart Irrigation System for Applied Irrigation Management in Tomato Seedling Production

Authors: Catariny C. Aleman, Flavio B. Campos, Matheus A. Caliman, Everardo C. Mantovani

Abstract:

The seedling production stage is a critical point in the vegetable production system. Obtaining high-quality seedlings is a prerequisite for the subsequent crop to develop well, so productivity optimization is required. Water management is an important step in agricultural production. Adequately meeting the water requirement of horticultural seedlings can provide higher quality and increase field production. The practice of irrigation is indispensable and requires a properly adjusted, good-quality irrigation system, together with a specific water management plan, to meet the water demand of the crop. Irrigation management in seedling production requires a great deal of specific information, especially when it involves the use of inputs such as water-retaining (hydrogel) polymers and automation technologies for data acquisition and irrigation control. The experiment was conducted in a greenhouse at the Federal University of Viçosa, Viçosa - MG. Tomato seedlings (Lycopersicon esculentum Mill.) were produced in plastic trays of 128 cells, suspended 1.25 m above the ground. The seedlings were irrigated by 4 fixed-jet 360º micro sprinklers per tray, duly isolated by sideboards, following the methodology developed for this work. During Phase 1, in January/February 2017 (duration of 24 days), the crop coefficient (Kc) of seedlings cultivated in the presence and absence of hydrogel was evaluated with a weighing lysimeter. In Phase 2, September 2017 (duration of 25 days), the seedlings were subjected to 4 irrigation management treatments (Kc, timer, 0.50 ETo, and 1.00 ETo), in the presence and absence of hydrogel, and then evaluated with respect to quality parameters. The microclimate inside the greenhouse was monitored with air temperature, relative humidity, and global radiation sensors connected to a microcontroller that performed hourly calculations of reference evapotranspiration by the FAO56 Penman-Monteith standard method, modified for the long-wave radiation balance according to Walker, Aldrich, and Short (1983), and carried out the water balance and irrigation decision-making for each experimental treatment. The Kc of seedlings cultivated on a substrate with hydrogel (1.55) was higher than the Kc on a pure substrate (1.39). The use of the hydrogel was a differential for the production of earlier tomato seedlings, with greater final height, larger stem collar diameter, greater accumulation of shoot dry mass, a larger crown projection area, and a greater relative growth rate. The 1.00 ETo management promoted the highest relative growth rate.
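
A hedged sketch of the water-balance logic described above is given below: hourly reference evapotranspiration ETo (assumed already computed by the FAO56 Penman-Monteith routine on the microcontroller) is multiplied by the crop coefficient Kc and accumulated until an assumed trigger depth is reached. The Kc value comes from the Phase 1 result; the trigger depth and ETo series are invented for illustration.

```python
KC_WITH_HYDROGEL = 1.55     # crop coefficient from the Phase 1 lysimeter result
TRIGGER_DEPTH_MM = 2.0      # assumed management-allowed depletion (illustrative)

def water_balance(hourly_eto_mm, kc=KC_WITH_HYDROGEL, trigger=TRIGGER_DEPTH_MM):
    deficit = 0.0
    for eto in hourly_eto_mm:
        deficit += kc * eto                     # crop evapotranspiration ETc = Kc * ETo
        if deficit >= trigger:
            print(f"Irrigate {deficit:.2f} mm")  # apply the accumulated depth, then reset
            deficit = 0.0

# Invented hourly ETo values (mm) for part of a day.
water_balance([0.10, 0.25, 0.40, 0.55, 0.60, 0.45, 0.30, 0.15])
```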

Keywords: automatic system, efficiency of water use, precision irrigation, micro sprinkler

Procedia PDF Downloads 116
152 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no less than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend in the homeless population is crucial for helping states and cities make affordable housing plans and other community service plans ahead of time to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on the homeless population and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned on the dataset from New York City, with its accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the Coefficient of Determination (R²) from -11.73 to 0.88 and reducing MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak error of 14.5% between the actual and the predicted counts. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and the predicted homeless population, with a peak error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model time series of homelessness-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Moreover, this prediction can serve as a reference to policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
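
The sketch below is illustrative only and is not HP-RNN itself: a minimal recurrent network, assuming TensorFlow/Keras, that learns to predict the next month's sheltered homeless count from a sliding window of previous months. The monthly series is synthetic, interpolated only to echo the New York City counts quoted above.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(3)
series = np.linspace(12830, 62679, 240) + rng.normal(0, 500, 240)   # synthetic monthly counts
series = (series - series.mean()) / series.std()                    # normalize for training

window = 12                                                          # months of history per sample
X = np.array([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(16, input_shape=(window, 1)),          # small recurrent layer
    tf.keras.layers.Dense(1),                                        # next-month prediction
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, verbose=0)
print("MSE on the synthetic series:", model.evaluate(X, y, verbose=0))
```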

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 121