Search results for: pearson correlation coefficient
2650 Design and Analysis of Enhanced Heat Transfer Kit for Plate Type Heat Exchanger
Authors: Muhammad Shahrukh Saeed, Syed Ahmad Nameer, Shafiq Ur Rehman, Aisha Jillani
Abstract:
Heat exchangers play a critical role in industrial thermal systems. Their physical size and performance are vital parameters; therefore, enhancement of heat transfer through different techniques has remained a major research area for both academia and industry. This research reports the design of an enhanced kit for a plate type heat exchanger, which plays a vital role in the heat transfer process. A plate type heat exchanger chiefly requires a design in which the plates can be installed and removed easily and without damage to the plates. The flow of the fluid within the heat exchanger should be fully developed, since natural laws allow the driving energy of the system to flow until equilibrium is achieved. In a plate type heat exchanger, heat penetrates the surface separating the hot medium from the cold one very easily. Some precautions should be considered when assessing the heat exchanger: heat should transfer from the hot medium to the cold one, a temperature difference should always be present, and the heat lost by the hot body should equal the heat gained by the cold body, apart from losses to the surroundings. Aluminum plates of the same grade were used in all experiments to ensure similarity. All plates measured 254 mm x 100 mm, with a thickness of 5 mm.
Keywords: heat transfer coefficient, aluminium, entry length, design
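The hot-side/cold-side energy balance invoked above can be made concrete with a short calculation. The following sketch is not from the paper; the flow rates and temperatures are hypothetical placeholders, with water's specific heat assumed for both streams.

```python
# Illustrative energy-balance check for a plate heat exchanger
# (hypothetical values, not the paper's experimental data).

def duty(m_dot, cp, t_in, t_out):
    """Heat duty Q = m_dot * cp * |t_in - t_out|, in watts."""
    return m_dot * cp * abs(t_in - t_out)

cp_water = 4180.0                            # J/(kg*K)
q_hot = duty(0.50, cp_water, 70.0, 55.0)     # hot stream cools 70 -> 55 C
q_cold = duty(0.60, cp_water, 20.0, 32.5)    # cold stream warms 20 -> 32.5 C

print(f"Q_hot  = {q_hot / 1000:.2f} kW")
print(f"Q_cold = {q_cold / 1000:.2f} kW")
# Ideally Q_hot == Q_cold; any imbalance is loss to the surroundings.
print(f"imbalance = {100 * (q_hot - q_cold) / q_hot:.1f} %")
```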
Procedia PDF Downloads 333
2649 Effect of Arsenic Treatment on Element Contents of Sunflower, Growing in Nutrient Solution
Authors: Szilvia Várallyay, Szilvia Veres, Éva Bódi, Farzaneh Garousi, Béla Kovács
Abstract:
The agricultural environment is contaminated with heavy metals and other toxic elements, which pose a growing threat. One of the most important toxic elements is arsenic. Arsenic toxicity in the plant organism decreases the weight of the roots and causes discoloration and necrosis of leaves. The toxicity of arsenic depends on the quality and quantity of the arsenic species present; in both soil and plants, arsenic can occur in its most hazardous species. A dicotyledonous plant, sunflower, was chosen for the experiment. The sunflower plants were grown in nutrient solution at different As(III) levels. The As, P, and Fe contents of the experimental plants were measured using ICP-MS. A negative correlation was observed between higher concentrations of As(V) and As(III) in the nutrient solution and the P content of the sunflower tissue. The Fe content decreased at the higher arsenic concentration (30 mg kg-1). We conclude that arsenic had a negative effect on the P and Fe content of sunflower tissue.
Keywords: arsenic, sunflower, ICP-MS, toxicity
Procedia PDF Downloads 648
2648 Greenhouse Gas Emissions from a Tropical Eutrophic Freshwater Wetland
Authors: Juan P. Silva, T. R. Canchala, H. J. Lubberding, E. J. Peña, H. J. Gijzen
Abstract:
This study measured the fluxes of greenhouse gases (GHGs), i.e., CO2, CH4 and N2O, from a tropical eutrophic freshwater wetland ("Sonso Lagoon") which receives nutrient loading from several sources, i.e., agricultural run-off, domestic sewage, and a polluted river. The flux measurements were carried out at four different points using the static chamber technique. CO2 fluxes ranged from -8270 to 12210 mg.m-2.d-1 (median = 360; SD = 4.11; n = 50), CH4 ranged between 0.2 and 5270 mg.m-2.d-1 (median = 60; SD = 1.27; n = 45), and N2O ranged from -31.12 to 15.4 mg N2O m-2.d-1 (median = 0.05; SD = 9.36; n = 42). Although some negative fluxes were observed in the zone dominated by floating plants, i.e., Eichhornia crassipes, Salvinia sp., and Pistia stratiotes L., the mean values indicated that the Sonso Lagoon was a net source of CO2, CH4 and N2O. In addition, an effect of eutrophication on GHG emissions could be observed in the positive correlation found between CO2, CH4 and N2O generation and COD, PO4-3, NH3-N, TN and NO3-N. The impact of eutrophication on GHG production highlights the necessity of limiting anthropogenic activities around freshwater wetlands.
Keywords: eutrophication, greenhouse gas emissions, freshwater wetlands, climate change
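As context for the static chamber technique mentioned above, a flux is typically derived from the slope of gas concentration versus time inside the closed chamber, scaled by the chamber volume-to-area ratio. The sketch below is hypothetical (the chamber dimensions and readings are invented, and concentrations are assumed already converted to mg m-3); it is not the authors' calculation.

```python
# Hypothetical static-chamber flux estimate: flux = dC/dt * (V / A).
import numpy as np

t_min = np.array([0, 5, 10, 15, 20])                     # minutes after closure
c_mg_m3 = np.array([712.0, 736.0, 761.0, 783.0, 809.0])  # CH4 concentration

slope, _ = np.polyfit(t_min, c_mg_m3, 1)                 # mg m^-3 min^-1
volume, area = 0.030, 0.075                              # chamber V (m^3), A (m^2)

flux = slope * (volume / area) * 60 * 24                 # mg m^-2 d^-1
print(f"flux ~ {flux:.0f} mg m^-2 d^-1")
```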
Procedia PDF Downloads 361
2647 Classification of Computer Generated Images from Photographic Images Using Convolutional Neural Networks
Authors: Chaitanya Chawla, Divya Panwar, Gurneesh Singh Anand, M. P. S Bhatia
Abstract:
This paper presents a deep-learning mechanism for distinguishing computer generated images from photographic images. The proposed method relies on a convolutional layer capable of automatically learning the correlation between neighbouring pixels. In its standard form, a Convolutional Neural Network (CNN) learns features based on an image's content rather than on its structural features. The proposed layer is specifically designed to suppress an image's content and robustly learn the sensor pattern noise features (usually inherited from image processing in a camera) as well as the statistical properties of images. The method was evaluated on recent natural and computer generated images, and it was concluded that it performs better than the current state-of-the-art methods.
Keywords: image forensics, computer graphics, classification, deep learning, convolutional neural networks
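The content-suppression idea described above is often approximated by placing a fixed high-pass filter in front of the network so that mostly noise-like residuals reach the learnable layers. The sketch below illustrates that residual-extraction step only; it is a hypothetical stand-in, not the paper's constrained convolutional layer.

```python
# High-pass residual extraction (a sketch of the content-suppression idea).
import numpy as np
from scipy.signal import convolve2d

# A 5x5 high-pass kernel commonly used in media forensics.
hp_kernel = np.array([[-1,  2,  -2,  2, -1],
                      [ 2, -6,   8, -6,  2],
                      [-2,  8, -12,  8, -2],
                      [ 2, -6,   8, -6,  2],
                      [-1,  2,  -2,  2, -1]]) / 12.0

def residual(gray):
    """Suppress image content, keeping the noise-like residual."""
    return convolve2d(gray, hp_kernel, mode="same", boundary="symm")

img = np.random.rand(64, 64)       # stand-in for a grayscale image
print(residual(img).std())         # residual statistics feed the classifier
```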
Procedia PDF Downloads 337
2646 GIS Application in Surface Runoff Estimation for Upper Klang River Basin, Malaysia
Authors: Suzana Ramli, Wardah Tahir
Abstract:
Estimation of surface runoff depth is a vital part of any rainfall-runoff modeling. It leads to stream flow calculation and later predicts flood occurrences. GIS (Geographic Information System) is an advanced and apposite tool for simulating hydrological models due to its realistic representation of topography. The paper discusses the calculation of surface runoff depth for two selected events using GIS with the Curve Number method for the Upper Klang River basin. GIS enables map intersection between soil type and land use, which produces the curve number map. The results show good correlation between simulated and observed values, with R2 values above 0.7. Acceptable performance on statistical measures, namely mean error, absolute mean error, RMSE, and bias, is also reported in the paper.
Keywords: surface runoff, geographic information system, curve number method, environment
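For reference, the Curve Number method named above computes runoff depth from event rainfall and a CN value derived from the soil/land-use intersection. A worked sketch follows; the rainfall and CN values are hypothetical, not those of the Upper Klang basin.

```python
# SCS Curve Number runoff depth (illustrative values).
def scs_runoff_depth(p_mm, cn):
    """Runoff depth Q (mm) from rainfall P (mm) for curve number CN."""
    s = 25400.0 / cn - 254.0            # potential maximum retention (mm)
    ia = 0.2 * s                        # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

print(f"Q = {scs_runoff_depth(p_mm=80.0, cn=85):.1f} mm")   # ~43.6 mm
```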
Procedia PDF Downloads 282
2645 Floodplain Modeling of River Jhelum Using HEC-RAS: A Case Study
Authors: Kashif Hassan, M.A. Ahanger
Abstract:
Floods have become more frequent and severe due to the effects of global climate change and human alteration of the natural environment. Flood prediction/forecasting and control is one of the greatest challenges facing the world today. Flood forecasting is achieved by the use of hydraulic models such as HEC-RAS, which are designed to simulate the flow processes of surface water. Extreme flood events in the river Jhelum, lasting from a day to a few days, are a major disaster in the State of Jammu and Kashmir, India. In the present study, the HEC-RAS model was applied to two different reaches of the river Jhelum in order to estimate the flood levels corresponding to 25, 50 and 100 year return period flood events at important locations and to deduce the flood vulnerability of important areas and structures. The flow rates for the two reaches were derived from flood-frequency analysis of 50 years of historic peak flow data. Manning's roughness coefficient n was selected using detailed analysis. Rating curves were also generated to serve as the basis for determining the boundary conditions. Calibration and validation procedures were applied in order to ensure the reliability of the model. Sensitivity analysis was also performed in order to ensure the accuracy of Manning's n in generating water surface profiles.
Keywords: flood plain, HEC-RAS, Jhelum, return period
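The return-period flows that drive such a HEC-RAS run come from a flood-frequency fit to the annual peak series. The abstract does not state which distribution was fitted, so the sketch below uses a Gumbel (EV1) fit by the method of moments on synthetic peaks purely for illustration.

```python
# Hypothetical flood-frequency sketch: 25/50/100-year flows from annual peaks.
import numpy as np

peaks = np.random.default_rng(1).gumbel(900, 250, size=50)  # stand-in AMS (m^3/s)

beta = np.sqrt(6) * peaks.std(ddof=1) / np.pi   # Gumbel scale (moments)
mu = peaks.mean() - 0.5772 * beta               # Gumbel location

for T in (25, 50, 100):
    q_t = mu - beta * np.log(-np.log(1 - 1 / T))  # quantile for return period T
    print(f"{T:>3}-yr flood ~ {q_t:.0f} m^3/s")
```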
Procedia PDF Downloads 426
2644 Study and Modeling of Flood Watershed in Arid and Semi Arid Regions of Algeria
Authors: Belagoune Fares, Boutoutaou Djamel
Abstract:
The study on floods in Algeria established by the National Agency of Water Resources (ANRH) shows that the country is confronted with very destructive flash floods, especially in arid and semi-arid regions. River flooding in these areas is less well understood. These floods are sudden, triggered by rain showers and thunderstorms, with durations on the order of minutes to hours. The human and material damage caused by these floods remains high. The study area encompasses three watersheds in the semi-arid and arid south-east of Algeria: the Chott-Melghir basin (68,751 km2), the Constantine highlands basin 07 (9,578 km2) and the El Hodna basin 05 (25,843 km2). The total area of this zone is about 104,500 km2. Flood protection studies and design studies of hydraulic structures (spillways, storm basins, etc.) require raw data that are often unavailable, particularly for the ungauged wadis of these areas. This creates great difficulty for planners and managers working in the field of hydraulic studies. The objective of this study is to propose a methodology for calculating floods on ungauged rivers, in the absence of observations, in the semi-arid and arid south-east of Algeria.
Keywords: flood, watershed, specific flow, coefficient of variation, arid
Procedia PDF Downloads 506
2643 Retail Managers’ Perception on Coca-Cola Company’s Success of Glass Package Recovery and Recycling in Nairobi, Kenya
Authors: Brigitte Wabuyabo-Okonga
Abstract:
Little research has been done to establish the level of success of the Coca Cola Company in recycling and reusing its glass bottles. This paper attempts to establish retail managers’ perception of the company’s self-proclaimed success. Retail managers of supermarkets in the CBD of Nairobi, Kenya were considered for the study. Data were collected through questionnaires and analyzed using descriptive statistics (means, frequencies and percentages) and inferential statistics (correlation analysis). The study found that there is relative success, although a lot remains to be done, for example, improving the communication of policy issues and, in practice, enhancing the actual collection of broken and/or non-broken Coca Cola Company glass bottles by providing drop-off points in open areas such as streets and parks.
Keywords: Coca Cola Company glass bottles, Kenya, Nairobi, packaging, retail manager
Procedia PDF Downloads 313
2642 Numerical Modeling to Validate Theoretical Models of Toppling Failure in Rock Slopes
Authors: Hooman Dabirmanesh, Attila M. Zsaki
Abstract:
Traditionally, rock slope stability against toppling failure is investigated using limit equilibrium analysis. In these equilibrium methods, the internal forces exerted between columns are not clearly defined, and to the authors’ best knowledge, there is no consensus in the literature with respect to the results of the analysis. A discrete element method-based numerical model was developed and applied to simulate the behavior of rock layers subjected to toppling failure. Based on this calibrated numerical model, a study of the location and distribution of the internal forces that result in equilibrium was carried out. The sum of the side forces was applied at a point on a block that properly represents the force, in order to determine the inter-column force distribution. The resulting side force distribution coefficient was compared to those obtained from laboratory centrifuge tests. The results of the simulation show suitable criteria for selecting the correct position of the internal force exerted between rock layers. In addition, the numerical method demonstrates how a theoretical method can be made reliable by considering the interaction between the rock layers.
Keywords: contact bond, discrete element, force distribution, limit equilibrium, tensile stress
Procedia PDF Downloads 143
2641 Application of the Quantile Regression Approach to the Heterogeneity of the Fine Wine Prices
Authors: Charles-Olivier Amédée-Manesme, Benoit Faye, Eric Le Fur
Abstract:
In this paper, the heterogeneity of the Bordeaux Legends 50 wine market price segment is addressed. For this purpose, quantile regression is applied, with market segmentation based on wine bottle price quantile, and the hedonic price of wine attributes is computed for various price segments of the market. The approach is applied to a major privately held data set which consists of approximately 30,000 transactions over the 2003–2014 period. The findings suggest that the relative hedonic prices of several wine attributes differ significantly among deciles. In particular, the elasticity coefficient of the expert ratings shows strong variation across prices. If, as suggested in the literature, expert ratings have a positive influence on wine price on average, they have a clearly decreasing impact over the quantiles. Finally, the lower the wine price, the higher the potential for price appreciation over time. Other variables such as chateaux or vintage are also shown to vary across the distribution of wine prices. While enhancing our understanding of the complex market dynamics that underlie Bordeaux wine prices, this research provides empirical evidence that the QR approach adequately captures heterogeneity among wine price ranges, which simultaneously applies to wine stock, vintage and auction houses.
Keywords: hedonics, market segmentation, quantile regression, heterogeneity, wine economics
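As an illustration of the method, the sketch below fits a hedonic equation at several quantiles with statsmodels; the data, variable names and coefficients are synthetic, not the paper's private data set.

```python
# Hedonic quantile regression across price quantiles (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "rating": rng.uniform(85, 100, n),   # hypothetical expert score
    "age": rng.integers(1, 40, n),       # hypothetical bottle age (years)
})
df["log_price"] = 2 + 0.05 * df["rating"] + 0.02 * df["age"] \
    + rng.normal(0, 0.3, n)

for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("log_price ~ rating + age", df).fit(q=q)
    print(f"q={q}: rating coefficient = {fit.params['rating']:.3f}")
# Coefficients that vary across q are the heterogeneity the paper reports.
```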
Procedia PDF Downloads 340
2640 An Optimal Approach for Full-Detailed Friction Model Identification of Reaction Wheel
Authors: Ghasem Sharifi, Hamed Shahmohamadi Ousaloo, Milad Azimi, Mehran Mirshams
Abstract:
The ever-increasing use of satellites demands increasingly accurate and reliable pointing systems. Reaction wheels are rotating devices commonly used for spacecraft attitude control, since they provide a wide range of torque magnitudes and high reliability. Numerical modeling of this device can significantly enhance the accuracy of satellite control in space. Modeling the wheel rotation in the presence of the various friction components is one of the critical parts of this approach. This paper presents a Dynamic Model Control of a Reaction Wheel (DMCR) in the current control mode. In current mode, the required current is delivered to the coils in order to achieve the desired torque. In this research, all the friction parameters (viscous and Coulomb), as well as the motor coefficient, resistance, and voltage constant, are identified. For model identification of the reaction wheel, numerous varying current commands are applied to the particular wheel to verify the estimated model. All the parameters of DMCR are identified by the classical Levenberg-Marquardt (CLM) optimization method. The experimental results demonstrate that the developed model has appropriate precision and can be used in satellite control simulation.
Keywords: experimental modeling, friction parameters, model identification, reaction wheel
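To make the identification step concrete, the sketch below fits a reduced friction model (motor constant, viscous and Coulomb terms; the paper's full model has more parameters) to a simulated constant-current response using Levenberg-Marquardt via SciPy. All values are hypothetical.

```python
# Reduced reaction-wheel friction identification with Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 10, 200)
i_cmd = 0.4 * np.ones_like(t)       # constant current command (A)
J = 0.01                            # wheel inertia, kg m^2 (assumed known)

def simulate(params):
    """Euler-integrate J*dw/dt = kt*i - c*w - tau_c*sign(w)."""
    kt, c, tau_c = params
    w = np.zeros_like(t)
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        torque = kt * i_cmd[k] - c * w[k - 1] - tau_c * np.sign(w[k - 1])
        w[k] = w[k - 1] + dt * torque / J
    return w

truth = (0.05, 1e-4, 2e-3)          # "true" kt, c, tau_c for the demo
w_meas = simulate(truth) + np.random.default_rng(2).normal(0, 0.05, t.size)

fit = least_squares(lambda p: simulate(p) - w_meas,
                    x0=(0.03, 5e-5, 1e-3), method="lm")
print("estimated kt, c, tau_c:", fit.x)
```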
Procedia PDF Downloads 233
2639 Performativity and Valuation Techniques: Evidence from Investment Banks in the Wake of the Global Financial Crisis
Authors: Alicja Reuben, Amira Annabi
Abstract:
In this paper, we explore the relationship between the selection of valuation techniques by investment banks and the banks’ risk perceptions and performance in the context of the theory of performativity. We use inferential statistics to study these relationships by building a unique dataset based on the disclosure of 12 investment banks’ 2012-2015 annual financial statements. Moreover, we create two constructs, namely intensity of use and risk perception. We measure the intensity of use as a frequency metric of how often a particular bank adopts valuation techniques for a particular asset or liability. We measure risk perception based on disclosed ranges of values for unobservable inputs. Our results are twofold: we find a significant negative correlation between (1) intensity of use and investment bank performance and (2) intensity of use and risk perception. These results indicate that a performative process takes place, and the valuation techniques are enacting their environment.
Keywords: language, linguistics, performativity, financial techniques
Procedia PDF Downloads 160
2638 Bioinformatic Prediction of Hub Genes by Analysis of Signaling Pathways, Transcriptional Regulatory Networks and DNA Methylation Pattern in Colon Cancer
Authors: Ankan Roy, Niharika, Samir Kumar Patra
Abstract:
An anomalous nexus of complex topological assemblies and spatiotemporal epigenetic choreography at the chromosomal territory may form the most sophisticated regulatory layer of gene expression in cancer. Colon cancer is one of the leading malignant neoplasms of the lower gastrointestinal tract worldwide. There is still a paucity of information about the complex molecular mechanisms of colonic cancerogenesis. Bioinformatics prediction and analysis help to identify essential genes and significant pathways for monitoring and conquering this deadly disease. The present study investigates and explores potential hub genes as biomarkers and effective therapeutic targets for colon cancer treatment. Gene expression profile datasets from colon cancer patient samples, namely GSE44076, GSE20916, and GSE37364, were downloaded from the Gene Expression Omnibus (GEO) database and thoroughly screened using the GEO2R tool and Funrich software to find common differentially expressed genes (DEGs). Other approaches, including Gene Ontology (GO) and KEGG pathway analysis, Protein-Protein Interaction (PPI) network construction and hub gene investigation, Overall Survival (OS) analysis, gene correlation analysis, methylation pattern analysis, and hub gene-transcription factor regulatory network construction, were performed and validated using various bioinformatics tools. Initially, we identified 166 DEGs, including 68 up-regulated and 98 down-regulated genes. Up-regulated genes are mainly associated with cytokine-cytokine receptor interaction, the IL17 signaling pathway, ECM-receptor interaction, focal adhesion and the PI3K-Akt pathway. Down-regulated genes are enriched in metabolic pathways, retinol metabolism, steroid hormone biosynthesis, and bile secretion. From the protein-protein interaction network, thirty hub genes with high connectivity were selected using the MCODE and cytoHubba plugins. Survival analysis, expression validation, correlation analysis, and methylation pattern analysis were further verified using TCGA data. Finally, we predicted COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as potential master regulators in colonic cancerogenesis. Moreover, our experimental data highlight that disruption of the lipid raft and RAS/MAPK signaling cascade affects this gene hub at the mRNA level. We identified COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as determinant hub genes in colon cancer progression. They can be considered biomarkers for diagnosis and promising therapeutic targets in colon cancer treatment. Additionally, our experimental data suggest that these signaling pathways act as a connecting link between the membrane hub and the gene hub.
Keywords: hub genes, colon cancer, DNA methylation, epigenetic engineering, bioinformatic predictions
Procedia PDF Downloads 128
2637 Numerical Investigation of Divergence and Rib Orientation Effects on Thermal Performance in a Divergent Duct, as an Application of Inner Cooling of Turbine Blades
Authors: Heidar Jafarizadeh, Hossein Keshtkar, Ahmad Sohankar
Abstract:
Heat transfer and turbulent flow structure were studied in a divergent ribbed duct with varying duct geometry at Reynolds numbers of 7000 to 90000 using numerical methods. In this study, we validated our numerical results for a ribbed duct with an initial slope of zero to 3 degrees against available experimental data, and investigated the impact of the duct's divergence on heat transfer and flow pattern in two-dimensional flow. We then investigated the effect of tilting the ribs on heat transfer and flow behavior by changing the rib angles over a range of 40 to 75 degrees in a divergent duct and simulating the flow in three dimensions. Our results show that as duct divergence increases, heat transfer increases linearly and the friction coefficient increases exponentially. A duct with a divergence angle of 1.5 degrees presents the best thermal performance among all the angles we studied. Furthermore, a ribbed duct with a 40 degree rib orientation had the best thermal performance considering the simultaneous effects of pressure drop and heat transfer.
Keywords: divergent ribbed duct, heat transfer, thermal performance, turbulent flow structure
Procedia PDF Downloads 302
2636 Procedure for Monitoring the Process of Behavior of Thermal Cracking in Concrete Gravity Dams: A Case Study
Authors: Adriana de Paula Lacerda Santos, Bruna Godke, Mauro Lacerda Santos Filho
Abstract:
Several dams in the world have already collapsed, causing environmental, social and economic damage. The concern to avoid future disasters has stimulated the creation of a great number of laws and rules in many countries. In Brazil, Law 12.334/2010 was created, establishing the National Policy on Dam Safety. Overall, this policy requires dam owners to invest in the maintenance of their structures and to improve their monitoring systems in order to provide faster and more straightforward responses in the case of an increase in risk. As monitoring tools, visual inspections provide a comprehensive assessment of a structure's performance, while instrumentation adds specific information on operational or behavioral changes, providing an alarm when a performance indicator exceeds acceptable limits. These limits can be set using statistical methods based on the relationship between an instrument's measurements and other variables, such as reservoir level, time of year, or other instruments' readings. Besides the design parameters (uplift of the foundation, displacements, etc.), dam instrumentation can also be used to monitor the behavior of defects and damage manifestations. Specifically in concrete gravity dams, one of the main causes of cracking is the concrete volumetric change generated by thermal phenomena, which are associated with the construction process of these structures. Based on this, the goal of this research is to propose a process for monitoring the behavior of thermal cracking in concrete gravity dams, through instrumentation data analysis and the establishment of control values. Block B-11 of the Governor José Richa Dam power plant was selected as a case study; it presents a cracking process that was identified even before the filling of the reservoir in August 1998, and crack meters and surface thermometers were installed for its monitoring. Although these instruments were installed in May 2004, the research was restricted to the last 4.5 years (June 2010 to November 2014), when all the instruments were calibrated and producing reliable data. The adopted method is based on simple linear correlation procedures to understand the interactions among the instruments' time series, verifying the response times between them. Scatter plots were drawn from the best correlations, which supported the definition of the control limit values. Among the conclusions, it is shown that there is a strong or very strong correlation between ambient temperature and the crack meter and flowmeter measurements. Based on the results of the statistical analysis, it was possible to develop a tool for monitoring the behavior of the cracks in the case study. The goal of the research, to develop a proposal for a process for monitoring the behavior of thermal cracking in concrete gravity dams, was thus fulfilled.
Keywords: concrete gravity dam, dams safety, instrumentation, simple linear correlation
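The correlation-and-control-limit logic described above can be sketched in a few lines. The data below are synthetic, and the 3-sigma band around the regression line is one plausible way to set control values, not necessarily the thresholds adopted in the study.

```python
# Temperature vs crack-meter correlation and a simple control limit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
temp = rng.uniform(10, 35, 120)                        # ambient temperature (C)
crack = 0.8 - 0.012 * temp + rng.normal(0, 0.01, 120)  # crack opening (mm)

r, p = stats.pearsonr(temp, crack)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")            # strong negative correlation

res = stats.linregress(temp, crack)
resid_sd = np.std(crack - (res.slope * temp + res.intercept), ddof=2)
upper = lambda T: res.slope * T + res.intercept + 3 * resid_sd  # alarm level
print(f"control limit at 20 C: {upper(20.0):.3f} mm")
```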
Procedia PDF Downloads 292
2635 Design and Analysis of Adaptive Type-I Progressive Hybrid Censoring Plan under Step Stress Partially Accelerated Life Testing Using Competing Risk
Authors: Ariful Islam, Showkat Ahmad Lone
Abstract:
Statistical distributions have long been employed in the assessment of semiconductor device and product reliability. The power function distribution is one of the most important distributions in modern reliability practice and is frequently preferred over mathematically more complex distributions, such as the Weibull and the lognormal, because of its simplicity. Moreover, it may exhibit a better fit for failure data and provide more appropriate information about reliability and hazard rates in some circumstances. This study deals with estimating information about the failure times of items under step-stress partially accelerated life tests for competing risks, based on adaptive type-I progressive hybrid censoring criteria. The life data of the units under test are assumed to follow the Mukherjee-Islam distribution. Point and interval maximum-likelihood estimates are obtained for the distribution parameters and the tampering coefficient. The performance of the resulting estimators of the developed model parameters is evaluated and investigated using a simulation algorithm.
Keywords: adaptive progressive hybrid censoring, competing risk, Mukherjee-Islam distribution, partially accelerated life testing, simulation study
Procedia PDF Downloads 347
2634 Opportunity Integrated Assessment Facilitating Critical Thinking and Science Process Skills Measurement on Acid Base Matter
Authors: Anggi Ristiyana Puspita Sari, Suyanta
Abstract:
Recognizing the importance of developing critical thinking and science process skills, the assessment instrument should attend to the characteristics of chemistry. Constructing an accurate instrument for measuring those skills is therefore important. However, integrated assessment instruments are limited in number. The purpose of this study is to validate an integrated assessment instrument for measuring students' critical thinking and science process skills on acid-base matter. The development model of the test instrument was adapted from McIntire's model. The sample consisted of 392 second grade high school students in the academic year 2015/2016 in Yogyakarta. Exploratory factor analysis (EFA) was conducted to explore construct validity, whereas content validity was substantiated by Aiken's formula. The results show a KMO value of 0.714, which indicates sufficient items for each factor, and a significant Bartlett test (significance value less than 0.05). Furthermore, the content validity coefficient, based on 8 expert judgments, is 0.85. The findings support the integrated assessment instrument for measuring critical thinking and science process skills on acid-base matter.
Keywords: acid base matter, critical thinking skills, integrated assessment instrument, science process skills, validity
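Aiken's formula, used for the 0.85 content-validity coefficient above, is V = sum(r - lo) / (n * (hi - lo)) for n judges rating on a lo-to-hi scale. A worked sketch with hypothetical ratings:

```python
# Aiken's V for one item rated by eight judges on a 1-5 scale (made-up ratings).
def aikens_v(ratings, lo=1, hi=5):
    """V = sum(r - lo) / (n * (hi - lo)); ranges from 0 to 1."""
    n = len(ratings)
    return sum(r - lo for r in ratings) / (n * (hi - lo))

judges = [5, 4, 5, 4, 4, 5, 4, 4]
print(f"V = {aikens_v(judges):.2f}")   # 0.84 here; values near 1 indicate validity
```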
Procedia PDF Downloads 323
2633 Starting the Hospitalization Procedure with a Medicine Combination in the Cardiovascular Department of the Imam Reza (AS) Mashhad Hospital
Authors: Maryamsadat Habibi
Abstract:
Objective: Pharmaceutical errors are avoidable occurrences that can result in inappropriate pharmaceutical use, patient harm, treatment failure, increased hospital costs and length of stay, and other outcomes that affect both the individual receiving treatment and the healthcare provider. This study aimed to perform medication reconciliation in the cardiovascular ward of Imam Reza Hospital in Mashhad, Iran, and evaluate the prevalence of medication discrepancies between the best medication list created for the patient by the pharmacist and the medication order of the treating physician. Materials & Methods: The 97 patients in the cardiovascular ward of Imam Reza Hospital in Mashhad were the subjects of a cross-sectional study from June to September 2021. After giving informed consent and being admitted to the ward, all patients with at least one underlying condition and at least two medications taken at home were included in the study. A medication reconciliation form was used to record patient demographics and medication histories during the first 24 hours of admission, and the information was compared with the doctors' orders. The physician then identified medication discrepancies between the two lists and double-checked them to separate the intentional from the unintentional discrepancies. Finally, SPSS software version 22 was used to determine the prevalence of medication discrepancies and the relationship between the types of discrepancy and various variables. Results: The average age of the participants in this study was 57.69 ± 15.84 years, with 57.7% men and 42.3% women. Of these patients, 95.9% encountered at least one medication discrepancy, and 58.9% experienced at least one unintentional drug cessation. Out of the 659 medications registered in the study, 399 cases (60.54%) had discrepancies, of which 161 cases (40.35%) involved the intentional stopping of a medication, 123 cases (30.82%) involved the unintentional stopping of a medication, and 115 cases (28.82%) involved the continued use of a medication with a dose adjustment. Additionally, the categories of cardiovascular and gastrointestinal medications were found to have the most discrepancies in the current study. Furthermore, there was no correlation between the frequency of medication discrepancies and age, ward, date of visit, or type and number of underlying diseases (P=0.13, P=0.61, P=0.72, P=0.82, and P=0.44, respectively). On the other hand, there were statistically significant correlations between the prevalence of medication discrepancies and the number of medications taken at home (P=0.037) and gender (P=0.029). The results of this study revealed that 96% of patients admitted to the cardiovascular unit at Imam Reza Hospital had at least one medication error, typically an intentional drug discontinuation. According to the study's findings, when the medication reconciliation method is used, there is great potential for identifying and correcting various medication discrepancies and for avoiding prescription errors among patients admitted to Imam Reza Hospital's cardiovascular ward. As a result, it is essential to carry out a precise assessment to achieve the best treatment outcomes and avoid unintended medication discontinuation, unwanted drug-related events, and drug interactions between the patient's home medications and those prescribed in the hospital.
Keywords: drug combination, drug side effects, drug incompatibility, cardiovascular department
Procedia PDF Downloads 90
2632 Psycho-Social Issues: Drug Use and Abuse as a Social Problem among Secondary School Youths in Urban Centres of Benue State, Nigeria
Authors: Ode Kenneth Ogbu
Abstract:
This study was designed as a survey to investigate the incidence of drug use and abuse as a social problem among Nigerian youths in secondary schools in the urban centres of Benue State. A random sample of 500 SS 3 students and fresh secondary school graduates in the remedial science class of Benue State University, Makurdi, with a mean age of 16.8, was taken for the study. An instrument called the Drug Use and Abuse Perception Questionnaire (DAPQ), with a reliability coefficient of .74, was administered to the students. Only 337 copies of the questionnaire were properly completed and returned, which reduced the sample size to 337. The data were subjected to factor analysis, the X2 statistic, and frequency distribution using the split-half method. The results of the analysis showed that the DAPQ yielded seven baseline factors responsible for drug use and abuse; there was appreciable evidence that the study subjects used drugs (42.1%); alcohol topped the list of drugs consumed; most students used their pocket money to buy drugs; drugs were purchased from unconventional, hidden places; and 13 of the 20 items of the DAPQ were perceived as significant factors in drug use and abuse. The paper recommends proper intervention by government, parents and NGOs among students to reduce cases of drug abuse.
Keywords: drug abuse, psychology, psychiatry, students
Procedia PDF Downloads 309
2631 Sugar-Induced Stabilization Effect of Protein Structure
Authors: Mitsuhiro Hirai, Satoshi Ajito, Nobutaka Shimizu, Noriyuki Igarashi, Hiroki Iwase, Shinichi Takata
Abstract:
Sugars and polyols are known to be bioprotectants, preventing protein denaturation and enzyme deactivation, and are widely used as nontoxic additives in various industrial and medical products. The mechanism of their protective action has been explained by specific binding between biological components and additives, changes in solvent viscosity, and changes in surface tension and free energy upon transfer of those components into additive solutions. On the other hand, some organisms with tolerance to extreme environments produce stress proteins and/or accumulate sugars in their cells, a phenomenon called cryptobiosis. In particular, trehalose has been drawing attention in relation to cryptobiosis under external stresses such as high or low temperature, drying, and osmotic pressure. The function of trehalose in cryptobiosis has been explained by the restriction of intra- and/or inter-molecular movement through vitrification, or by the replacement of water molecules by trehalose. Previous results suggest that the structure of, and interaction between, sugar and water is a key determinant for understanding cryptobiosis. Recently, we have shown direct evidence that protein hydration (solvation) and structural stability against chemical and thermal denaturation significantly depend on the sugar species and glycerol. Sugar and glycerol molecules tend to be preferentially or weakly excluded from the protein surface and preserve the native protein hydration shell. Due to this protective action on the protein hydration shell, the protein structure is stabilized against chemical (guanidinium chloride) and thermal denaturation, and the protective action depends on the sugar species. To understand the above trend and differences in detail, it is essential to clarify the characteristics of solutions containing those additives. In this study, using a wide-angle X-ray scattering technique covering a wide spatial region (~3-120 Å), we have clarified the structures of sugar solutions at concentrations from 5% w/w to 65% w/w. The sugars measured in the present study were monosaccharides (glucose, fructose, mannose) and disaccharides (sucrose, trehalose, maltose). From the scattering data, which cover a wide spatial resolution, we succeeded in obtaining information on the internal structure of individual sugar molecules and on the correlation between them. Every sugar gradually shortened the average inter-molecular distance as the concentration increased. The inter-molecular interaction between sugar molecules essentially showed an exclusive tendency for every sugar, which appeared as the presence of a repulsive correlation hole. This trend was weaker for trehalose than for the other sugars. The inter-molecular distance and the spread of the individual molecule clearly depended on the sugar species. We discuss the relation between the characteristics of a sugar solution and its protective action on biological materials.
Keywords: hydration, protein, sugar, X-ray scattering
Procedia PDF Downloads 156
2630 Continuous Differential Evolution Based Parameter Estimation Framework for Signal Models
Authors: Ammara Mehmood, Aneela Zameer, Muhammad Asif Zahoor Raja, Muhammad Faisal Fateh
Abstract:
In this work, the strength of a bio-inspired computational intelligence technique is exploited for parameter estimation of periodic signals using Continuous Differential Evolution (CDE), by defining an error function in the mean square sense. The multidimensional and nonlinear nature of the problem arising in sinusoidal signal models, along with noise, makes it a challenging optimization task, which is dealt with by the robustness and effectiveness of CDE to ensure convergence and avoid trapping in local minima. In the proposed scheme of Continuous Differential Evolution based Signal Parameter Estimation (CDESPE), the unknown adjustable weights of the signal system identification model are optimized using the CDE algorithm. The performance of the CDESPE model is validated through various statistics-based performance indices, over a sufficiently large number of runs, in terms of estimation error, mean squared error and Theil's inequality coefficient. The efficacy of CDESPE is examined by comparison with the actual system parameters, with Genetic Algorithm based outcomes, and with various deterministic approaches at different signal-to-noise ratio (SNR) levels.
Keywords: parameter estimation, bio-inspired computing, continuous differential evolution (CDE), periodic signals
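The estimation task can be illustrated with SciPy's standard differential evolution standing in for the paper's CDE variant: recover sinusoid parameters from noisy samples by minimizing the mean-squared error. The signal parameters below are hypothetical.

```python
# Sinusoid parameter estimation by differential evolution (MSE objective).
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0, 1, 400)
true_a, true_f, true_phi = 2.0, 7.0, 0.6
y = true_a * np.sin(2 * np.pi * true_f * t + true_phi)
y += np.random.default_rng(5).normal(0, 0.2, t.size)   # additive noise

def mse(params):
    a, f, phi = params
    return np.mean((y - a * np.sin(2 * np.pi * f * t + phi)) ** 2)

res = differential_evolution(mse, bounds=[(0, 5), (1, 20), (-np.pi, np.pi)],
                             seed=0)
print("estimated a, f, phi:", res.x)                   # ~ (2.0, 7.0, 0.6)
```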
Procedia PDF Downloads 302
2629 The Relationship between Self-Censorship and Satisfaction of Iran Newspaper's Readers, Case Study: Iran Newspaper
Authors: Elham Taghizade Sigarodi, Ani Mirzakhanian
Abstract:
The journalism atmosphere in the present era is highly competitive, so what matters most is the speed of news broadcasting; the first newspaper to release the news therefore gains greater credibility. The value of the news, however, lies in its truthfulness. Expressing facts and reality is an accepted norm in the professional media arena and is considered the acceptable and trustworthy language of journalism. Nevertheless, various conditions generate self-censorship. The present study seeks to explore the relationship between self-censorship and the satisfaction of Iran newspaper's readers. The statistical population of Tehran readers was estimated at 384 persons based on Morgan's table, and through cluster sampling 50 journalists of Iran newspaper were selected, for a total sample size of 434 persons; a questionnaire was applied for data collection, and its reliability was supported based on Cronbach's alpha. Through Pearson correlation, the main hypothesis and all subsidiary hypotheses were supported except the fourth one.
Keywords: newspaper, satisfaction of audiences, self-censorship, journalists
Procedia PDF Downloads 259
2628 Flood Predicting in Karkheh River Basin Using Stochastic ARIMA Model
Authors: Karim Hamidi Machekposhti, Hossein Sedghi, Abdolrasoul Telvari, Hossein Babazadeh
Abstract:
Floods have huge environmental and economic impacts; therefore, flood prediction is given a lot of attention due to its importance. This study analysed the annual maximum streamflow (discharge) (AMS or AMD) of the Karkheh River in the Karkheh River Basin for flood prediction using an ARIMA model. For this purpose, we used the Box-Jenkins approach, which consists of a four-stage method: model identification, parameter estimation, diagnostic checking and forecasting (prediction). The main tools used in the ARIMA modelling were the SAS and SPSS software packages. Model identification was done by visual inspection of the ACF and PACF. The SAS software computed the model parameters using the ML, CLS and ULS methods. The diagnostic checking tests, the AIC criterion and the RACF and RPACF graphs, were used to verify the selected model. In this study, the best ARIMA model for the Annual Maximum Discharge (AMD) time series was (4,1,1), with an AIC value of 88.87. The RACF and RPACF showed the residuals' independence. In forecasting AMD for 10 future years, the model demonstrated its ability to predict floods of the river under study in the Karkheh River Basin. Model accuracy was checked by comparing the predicted and observed series using the coefficient of determination (R2).
Keywords: time series modelling, stochastic processes, ARIMA model, Karkheh river
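The Box-Jenkins steps named above map directly onto modern time-series libraries. The paper used SAS and SPSS; the sketch below substitutes statsmodels and a synthetic annual-maximum series purely to show the workflow for an ARIMA(4,1,1) fit.

```python
# ARIMA(4,1,1) fit and 10-step forecast on a synthetic AMD series.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)
ams = pd.Series(1000 + np.cumsum(rng.normal(0, 120, 50)))  # stand-in peaks

model = ARIMA(ams, order=(4, 1, 1)).fit()   # identification -> estimation
print(f"AIC = {model.aic:.2f}")             # diagnostic checking criterion
print(model.forecast(steps=10))             # 10-year-ahead prediction
```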
Procedia PDF Downloads 287
2627 Comparison of Cu Nanoparticle Formation and Properties with and without Surrounding Dielectric
Authors: P. Dubcek, B. Pivac, J. Dasovic, V. Janicki, S. Bernstorff
Abstract:
When grown only to nanometric sizes, metallic particles (e.g. Ag, Au and Cu) exhibit specific optical properties caused by the presence of a plasmon band. The plasmon band represents the collective oscillation of the conduction electrons and causes a narrow-band absorption of light in the visible range. When the nanoparticles are embedded in a dielectric, they also modify the dielectric's optical properties, and this can be fine-tuned by tuning the particle size. We investigated Cu nanoparticle growth with and without a surrounding dielectric (SiO2 capping layer). The morphology and crystallinity were investigated by GISAXS and GIWAXS, respectively. Samples were produced by high vacuum thermal evaporation of Cu onto a monocrystalline silicon substrate held at room temperature, 100°C or 180°C. One series was capped in situ by a 10 nm SiO2 layer. Additionally, samples were annealed at different temperatures up to 550°C, also in high vacuum. The samples deposited at room temperature and annealed at lower temperatures exhibit a continuous film structure: strong oscillations in the GISAXS intensity are present, especially in the capped samples. At higher temperatures, enhanced surface dewetting and Cu nanoparticle (nanoisland) formation partially destroy the flatness of the interface. Therefore, the particle type of scattering is enhanced, while the film fringes are depleted. However, the capping layer hinders particle formation, and the continuous film structure is preserved up to higher annealing temperatures (visible as strong and persistent fringes in GISAXS) compared to the non-capped samples. According to GISAXS, the lateral particle sizes are reduced at higher temperatures, while the particle height increases. This is ascribed to close packing of the formed particles at lower temperatures, so the GISAXS-deduced sizes are partially the result of particle agglomerate dimensions. Lateral maxima in GISAXS are an indication of good positional correlation, and the particle-to-particle distance increases as the particles grow with temperature elevation. This correlation is much stronger in the capped samples and in those deposited at lower temperatures. The dewetting is much more vigorous in the non-capped sample, and since nanoparticles form in a range of sizes, the correlation recedes with both deposition and annealing temperature. The surface topology was checked by atomic force microscopy (AFM). The capped samples' surfaces were smoother, and the lateral sizes of their surface features were larger, compared to the non-capped samples. Altogether, the AFM results suggest somewhat larger particles and a wider size distribution, which can be attributed to the difference in probe size. Finally, the plasmonic effect was monitored by UV-Vis reflectance spectroscopy, and the relatively weak plasmonic effect could be explained by incomplete dewetting or partial interconnection of the formed particles.
Keywords: copper, GISAXS, nanoparticles, plasmonics
Procedia PDF Downloads 123
2626 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon whereby certain images are more likely to be remembered by humans than others; it is a quantifiable, intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence: it reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered for quantifying the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability scores and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
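Given per-image reconstruction errors and latent codes from any trained autoencoder, the two correlations reported above are straightforward to compute. The sketch below uses random stand-in arrays rather than the MemCat images or the authors' VGG-based model.

```python
# Correlating reconstruction error and latent distinctiveness with memorability.
import numpy as np
from scipy import stats
from scipy.spatial.distance import cdist

rng = np.random.default_rng(7)
n = 1000
latent = rng.normal(size=(n, 64))        # stand-in latent codes
recon_err = rng.gamma(2.0, 1.0, n)       # stand-in per-image reconstruction MSE
memorability = 0.5 + 0.04 * stats.zscore(recon_err) + rng.normal(0, 0.05, n)

# Distinctiveness: Euclidean distance to the nearest neighbour in latent space.
d = cdist(latent, latent)
np.fill_diagonal(d, np.inf)
distinctiveness = d.min(axis=1)

print(stats.pearsonr(recon_err, memorability))        # error vs memorability
print(stats.pearsonr(distinctiveness, memorability))  # distinctiveness vs memorability
```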
Procedia PDF Downloads 91
2625 Task Kicking Performance with Biomechanical Instrumentation
Authors: T. Hirata, M. G. Silva, L. M. Rosa
Abstract:
Balance ability during task kicking in soccer is a determining factor in the execution of functional movements that require high-performance motor coordination. The current experiment explored it during an instep soccer kick and functional task kicking. Kicking performance was measured in terms of sway characteristics, using the lateral and antero-posterior balance of the center of pressure (COP) of the supporting leg, and kinematic data, namely the supporting leg's knee angle. The motion was performed in a one-legged stance by five male indoor soccer players using a trigger-device ball controller. The results showed larger sway in the antero-posterior direction than in the lateral direction. However, each player adopts a different way of kicking the ball, and the medio-lateral displacement of the COP showed no correlation with balance skill.
Keywords: kicking performance, center of pressure, one-legged stance, balance ability
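One common way to quantify the sway described above is the RMS displacement of the COP in each axis; the sketch below assumes that metric and synthetic force-plate data, since the abstract does not specify the exact sway statistic used.

```python
# RMS sway of the centre of pressure in ML and AP directions (synthetic data).
import numpy as np

rng = np.random.default_rng(9)
cop_ml = rng.normal(0, 4.0, 6000)   # medio-lateral COP, mm (60 s at 100 Hz)
cop_ap = rng.normal(0, 9.0, 6000)   # antero-posterior COP, mm

rms = lambda x: np.sqrt(np.mean((x - x.mean()) ** 2))
print(f"ML sway RMS = {rms(cop_ml):.1f} mm, AP sway RMS = {rms(cop_ap):.1f} mm")
```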
Procedia PDF Downloads 616
2624 Service Quality Improvement in Ghana's Healthcare Supply Chain
Authors: Ammatu Alhassan
Abstract:
Quality healthcare delivery is a crucial indicator in assessing the overall developmental status of a country. There are many limitations in the Ghanaian healthcare supply chain due to the lack of studies on the correlation between quality health service and the healthcare supply chain. Patients who visit various healthcare providers face unpleasant experiences such as delays in the availability of their medications. In this study, the quality of services provided to Ghanaian outpatients who visit public healthcare providers was assessed to establish its effect on the healthcare supply chain, using a conceptual model. Donabedian's structure-process-outcome theory for service quality evaluation was used to analyse 20 Ghanaian hospitals. The data obtained were tested using structural equation modeling (SEM). The findings from this research will help improve the overall quality of the Ghanaian healthcare supply chain; the model developed will provide a better understanding of the linkage between quality healthcare and the healthcare supply chain, as well as serving as a reference tool for future healthcare research in Ghana.
Keywords: Ghana, healthcare, outpatients, supply chain
Procedia PDF Downloads 186
2623 The Effect of Deficit Financing on Macro-Economic Variables in Nigeria (1970-2013)
Authors: Ezeoke Callistus Obiora, Ezeoke Nneka Angela
Abstract:
The study investigated the effect of deficit financing on macroeconomic variables in Nigeria. The specific objectives were to determine the relationship between deficit financing and GDP, interest rate, inflation rate, money supply, exchange rate and private investment, respectively, over a time series covering a period of 44 years (1970-2013). Ordinary Least Squares multiple regression produced the coefficient of determination (R2), F-test and t-test statistics used for the interpretation of the study. The findings revealed that deficit financing has a significant positive effect on GDP and the exchange rate. Deficit financing also has a positive but insignificant relationship with inflation, money supply and investment. Only the interest rate recorded a negative, yet insignificant, relationship with deficit financing. The implication of the findings is that deficit financing can be a veritable tool for boosting economic development in Nigeria, but the significant positive effect on the exchange rate implies that deficit financing devalues the Naira against other currencies, which can affect Nigeria's competitive advantage in the world market. Thus, the study concludes that deficit financing has not encouraged economic growth in Nigeria.
Keywords: deficit financing, money supply, exchange rate, inflation, GDP, investment, Nigeria
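The OLS setup described above can be reproduced in outline with statsmodels; the series below is synthetic and single-predictor, whereas the study regressed several macroeconomic variables on deficit financing over 1970-2013.

```python
# OLS regression with R^2, F-test and t-tests in the summary (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 44                                   # one observation per year, 1970-2013
df = pd.DataFrame({"deficit": rng.normal(5, 2, n)})
df["gdp_growth"] = 1.5 + 0.4 * df["deficit"] + rng.normal(0, 1, n)

X = sm.add_constant(df[["deficit"]])
fit = sm.OLS(df["gdp_growth"], X).fit()
print(fit.summary())                     # reports R^2, F-statistic, t-tests
```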
Procedia PDF Downloads 478
2622 The Impact of Innovations in Human Resource Practices, Innovation Capabilities and Competitive Advantage on Company Performance
Authors: Bita Kharazi
Abstract:
The purpose of this research was to investigate the impact of innovations in human resource practices, innovation capabilities, and competitive advantage on company performance. The research was applied in terms of purpose and, in terms of method, descriptive research of the correlational type. The statistical population comprised all the employees of the Zar Industrial and Research Group. Convenience sampling was used, and Cochran's formula was used to determine the statistical sample size. A standard questionnaire was used to collect the data, and SPSS software and simultaneous regression tests were used to analyze them. Based on the findings of the present research, the components of innovation in human resource practices, innovation capability, and competitive advantage have a significant impact on company performance.
Keywords: human resource management, innovation, competitive advantage, company performance
Procedia PDF Downloads 18
2621 The Influence of Cognitive Load in the Acquisition of Words through Sentence or Essay Writing
Authors: Breno Barrreto Silva, Agnieszka Otwinowska, Katarzyna Kutylowska
Abstract:
Research comparing lexical learning following the writing of sentences versus longer texts with keywords is limited and contradictory. One possibility is that the recursivity of writing may enhance processing and increase lexical learning; another is that the higher cognitive load of complex-text writing (e.g., essays), at least when timed, may hinder the learning of words. In our study, we selected two sets of 10 academic keywords matched for part of speech, length (number of characters), frequency (SUBTLEXus), and concreteness, and we asked 90 L1-Polish advanced-level English majors to use the keywords when writing sentences, timed essays (60 minutes), or untimed essays. First, all participants wrote a timed Control essay (60 minutes) without keywords. Then different groups produced Timed essays (60 minutes; n=33), Untimed essays (n=24), or Sentences (n=33) using the two sets of glossed keywords (counterbalanced). The comparability of the participants in the three groups was ensured by matching them for proficiency in English (LexTALE) and for several measures derived from the Control essay: VocD (assessing productive lexical diversity), normed errors (assessing productive accuracy), words per minute (assessing productive written fluency), and holistic scores (assessing overall quality of production). We measured lexical learning (depth and breadth) via an adapted Vocabulary Knowledge Scale (VKS) and a free association test. Cognitive load was measured in the three essays (Control, Timed, Untimed) using the normed number of errors and holistic scores (TOEFL criteria). The number of errors and essay scores were obtained from two raters (interrater reliability Pearson's r = .78-.91). Generalized linear mixed models showed no difference in the breadth and depth of keyword knowledge after writing Sentences, Timed essays, and Untimed essays. The task-based measurements found that the Control and Timed essays had similar holistic scores, but that the Untimed essay had better quality than the Timed essay. Also, the Untimed essay was the most accurate, and the Timed essay the most error-prone. Concluding, using keywords in Timed, but not Untimed, essays increased cognitive load, leading to more errors and lower quality. Still, writing sentences and essays yielded similar lexical learning, and differences in cognitive load between Timed and Untimed essays did not affect lexical acquisition.
Keywords: learning academic words, writing essays, cognitive load, English as an L2
Procedia PDF Downloads 73