Search results for: weighted approximation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1040

260 Tractography Analysis of the Evolutionary Origin of Schizophrenia

Authors: Asmaa Tahiri, Mouktafi Amine

Abstract:

A substantial body of traditional medical research has been devoted to managing and treating mental disorders. At present, to the best of our knowledge, the fundamental causes underlying most psychological disorders remain insufficiently understood, which limits early diagnosis, symptom management and treatment. The emerging field of evolutionary psychology is a promising avenue for addressing the origin of mental disorders, potentially leading to more effective treatments. Schizophrenia, a topical mental disorder, has been linked to the evolutionary adaptation of the human brain, reflected in the brain connectivity and asymmetry underlying humans' higher cognition; other primates, by contrast, serve as our closest living representation of the brain structure and connectivity of our earliest common African ancestors. As proposed in the evolutionary psychology literature, the pathophysiology of schizophrenia is expressed in, and directly linked to, altered connectivity between the Hippocampal Formation (HF) and the Dorsolateral Prefrontal Cortex (DLPFC). This research paper presents the results of tractography analysis on multiple open-access Diffusion Weighted Imaging (DWI) datasets of healthy subjects, schizophrenia-affected subjects and primates, illustrating the relevance of the connectivity between these brain regions and the underlying evolutionary changes in the human brain. Deterministic fiber tracking and streamline analysis were used to generate connectivity matrices from the DWI datasets, which were overlaid to compute distances and highlight disconnectivity patterns in conjunction with other fiber-tracking metrics: Fractional Anisotropy (FA), Mean Diffusivity (MD) and Radial Diffusivity (RD).
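
The three scalar metrics listed at the end of the abstract are standard functions of the diffusion tensor's eigenvalues. A minimal illustrative sketch (not the authors' tractography pipeline; the example eigenvalues are hypothetical):

```python
import math

def dti_metrics(l1, l2, l3):
    """FA, MD and RD from the eigenvalues (l1 >= l2 >= l3) of a diffusion tensor."""
    md = (l1 + l2 + l3) / 3.0                      # Mean Diffusivity
    rd = (l2 + l3) / 2.0                           # Radial Diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = math.sqrt(1.5 * num / den) if den > 0 else 0.0  # Fractional Anisotropy
    return fa, md, rd

# A strongly anisotropic, fiber-like voxel (units: mm^2/s)
fa, md, rd = dti_metrics(1.7e-3, 0.3e-3, 0.3e-3)
```

FA near 1 marks coherently oriented white matter, which is why voxel-wise FA along streamlines is a natural proxy for the strength of tracts such as the HF-DLPFC connection.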

Keywords: tractography, evolutionary psychology, schizophrenia, brain connectivity

Procedia PDF Downloads 41
259 The Impact of the Enron Scandal on the Reputation of Corporate Social Responsibility Rating Agencies

Authors: Jaballah Jamil

Abstract:

KLD (Peter Kinder, Steve Lydenberg and Amy Domini) Research & Analytics is an independent intermediary of social performance information that adopts an investor-pay model. The KLD rating agency does not explicitly monitor the rated firms, which suggests that KLD ratings may not incorporate private information. Moreover, KLD's failure to accurately predict Enron's extra-financial rating casts doubt on the reliability of its ratings. Therefore, we first investigate whether KLD ratings affect investors' perception by studying the effect of KLD rating changes on firms' financial performance. Second, we study the impact of the Enron scandal on investors' perception of KLD rating changes by comparing the effect of KLD rating changes on firms' financial performance before and after Enron's failure. We propose an empirical study that relates the returns of a number of equally-weighted portfolios, excess stock returns and the book-to-market ratio to different dimensions of KLD social responsibility ratings. We first find that over the last two decades KLD rating changes significantly and negatively influence the stock returns and book-to-market ratio of rated firms. This finding suggests that a rise in corporate social responsibility rating lowers the firm's risk. Second, to assess the Enron scandal's effect on the perception of KLD ratings, we compare the effect of KLD rating changes before and after the scandal. We find that after the Enron scandal this significant effect disappears. This finding supports the view that the Enron scandal annihilated KLD's effect on socially responsible investors. Our findings may therefore call into question the results of recent studies that use KLD ratings as a proxy for corporate social responsibility behavior.
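
The portfolio construction behind such a study can be sketched in a few lines: an equally-weighted portfolio return is the simple cross-firm average, and excess returns subtract the risk-free rate. All numbers below are hypothetical, for illustration only:

```python
def equal_weighted_return(firm_returns):
    """One-period return of an equally-weighted portfolio: the simple average."""
    return sum(firm_returns) / len(firm_returns)

def mean_excess_return(portfolio_returns, risk_free):
    """Average excess return of the portfolio over the risk-free rate."""
    excess = [r - rf for r, rf in zip(portfolio_returns, risk_free)]
    return sum(excess) / len(excess)

# Hypothetical monthly returns for three rated firms over two months
month1 = equal_weighted_return([0.02, -0.01, 0.03])
month2 = equal_weighted_return([0.01, 0.00, 0.02])
avg_excess = mean_excess_return([month1, month2], [0.002, 0.002])
```

In the study, series like `avg_excess` for rating-upgrade and rating-downgrade portfolios would then be the dependent variables in the regressions on KLD rating dimensions.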

Keywords: KLD social rating agency, investors' perception, investment decision, financial performance

Procedia PDF Downloads 420
258 Heuristics for Optimizing Power Consumption in the Smart Grid

Authors: Zaid Jamal Saeed Almahmoud

Abstract:

Our increasing reliance on electricity, combined with inefficient consumption trends, has resulted in several economic and environmental threats: billions of dollars wasted, limited resources drained, and the impact of climate change amplified. As a solution, the smart grid is emerging as the future power grid, with smart techniques to optimize power consumption and electricity generation. Minimizing the peak power consumption under a fixed delay requirement is a significant problem in the smart grid. In addition, matching demand to supply is a key requirement for the success of the future electricity grid. In this work, we consider the problem of minimizing the peak demand under appliance constraints by scheduling power jobs with uniform release dates and deadlines. As the problem is known to be NP-hard, we propose two versions of a heuristic algorithm for solving it. Our theoretical analysis and experimental results show that the proposed heuristics outperform existing methods by providing a better approximation to the optimal solution. In addition, we consider dynamic pricing methods to minimize the peak load and match demand to supply in the smart grid. Our contribution is the proposal of generic, as well as customized, pricing heuristics to minimize the peak demand and match demand with supply. We also propose optimal pricing algorithms that can be used when the maximum deadline period of the power jobs is relatively small. Finally, we provide theoretical analysis and conduct several experiments to evaluate the performance of the proposed algorithms.
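
The abstract does not give the heuristics' details, but the shape of the problem can be illustrated with a classic greedy rule (a sketch, not the paper's algorithm): place each unit-length power job, largest first, into the currently least-loaded time slot, so the peak grows as slowly as possible.

```python
def greedy_peak_schedule(job_powers, slots):
    """Greedy sketch (not the paper's heuristic): assign each unit-length power
    job, in decreasing power order, to the least-loaded slot of the delay
    window; all jobs share the same release date and deadline."""
    load = [0.0] * slots
    assignment = {}
    for job, power in sorted(enumerate(job_powers), key=lambda jp: -jp[1]):
        slot = min(range(slots), key=lambda s: load[s])   # least-loaded slot
        load[slot] += power
        assignment[job] = slot
    return max(load), assignment

# Six appliance jobs (power demands) to fit into a 3-slot delay window
peak, plan = greedy_peak_schedule([5, 4, 3, 3, 2, 1], slots=3)
```

Here the total demand is 18 over 3 slots, so a peak of 6 matches the trivial lower bound; in general the greedy rule only approximates the optimum, which motivates the more refined heuristics the paper proposes.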

Keywords: heuristics, optimization, smart grid, peak demand, power supply

Procedia PDF Downloads 70
257 Mg and MgN₃ Cluster in Diamond: Quantum Mechanical Studies

Authors: T. S. Almutairi, Paul May, Neil Allan

Abstract:

The geometrical, electronic and magnetic properties of the neutral Mg center and the MgN₃ cluster in diamond have been studied theoretically in detail by means of the HSE06 Hamiltonian, which includes a fraction of the exact exchange term; this is important for a satisfactory picture of the electronic states of open-shell systems. A further set of calculations with GGA functionals has also been included for comparison, and these support the HSE06 results. The local lattice perturbations induced by the Mg defect are confined to the first and second shells of atoms, beyond which they die out. The formation energy of the single Mg defect calculated with HSE06 and GGA agrees with the previous result. We found that the triplet state with C₃ᵥ symmetry is the ground state of the Mg center, with an energy lower than that of the C₂ᵥ singlet by ~0.1 eV. The recent experimental ZPL (557.4 nm) of the Mg center in diamond is discussed in view of the present work. The analysis of the band structure of the MgN₃ cluster confirms that the MgN₃ defect introduces a shallow donor level in the gap, lying close to the conduction band edge. This observation is supported by the EMM, which produces n-type levels shallower than the P donor level. The formation energy of MgN₂ calculated from a 2NV defect (~3.6 eV) is a promising value from which to engineer MgN₃ defects inside diamond. Ion implantation followed by heating to about 1200-1600°C might induce migration of N-related defects to the localized Mg center. Temperature control is needed for this process to repair the damage and ensure the mobilities of V and N, which demands a more precise experimental study.

Keywords: empirical marker method, generalised gradient approximation, Heyd–Scuseria–Ernzerhof screened hybrid functional, zero phonon line

Procedia PDF Downloads 98
256 Development of an Implicit Physical Influence Upwind Scheme for Cell-Centered Finite Volume Method

Authors: Shidvash Vakilipour, Masoud Mohammadi, Rouzbeh Riazi, Scott Ormiston, Kimia Amiri, Sahar Barati

Abstract:

An essential component of a finite volume method (FVM) is the advection scheme, which estimates values on the cell faces from the calculated values at the nodes or cell centers. The most widely used advection schemes are upwind schemes, which have been developed in FVM on various kinds of structured and unstructured grids. In this research, the physical influence scheme (PIS) is developed for a cell-centered FVM that uses an implicit coupled solver. Results are compared with the exponential differencing scheme (EDS) and the skew upwind differencing scheme (SUDS). The accuracy of these schemes is evaluated for a lid-driven cavity flow at Re = 1000, 3200, and 5000 and a backward-facing step flow at Re = 800. Simulations show considerable differences between the EDS results and the benchmarks, especially for the lid-driven cavity flow at high Reynolds numbers; these differences are due to false diffusion. Comparing the SUDS and PIS schemes shows relatively close results for the backward-facing step flow but different results for the lid-driven cavity flow. The poor performance of SUDS in the lid-driven cavity flow can be attributed to its insensitivity to the pressure difference between the cell face and the upwind points, which is critical for the prediction of such vortex-dominated flows.
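
The PIS itself is not reproduced here, but a generic first-order upwind update for 1D linear advection shows both the upwind idea (face values taken from the upstream node) and the false diffusion blamed for the EDS discrepancies: at Courant number 1 the scheme transports a pulse exactly, while at smaller Courant numbers the same pulse is smeared.

```python
def upwind_advect(u, c, dx, dt, steps):
    """First-order upwind scheme for u_t + c u_x = 0 with c > 0, periodic BCs.
    Face values are taken from the upstream (left) node."""
    nu = c * dt / dx                  # Courant number; need 0 < nu <= 1 for stability
    assert 0 < nu <= 1
    u = list(u)
    for _ in range(steps):
        u = [u[i] - nu * (u[i] - u[i - 1]) for i in range(len(u))]
    return u

n = 50
u0 = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]              # square pulse
u_exact = upwind_advect(u0, c=1.0, dx=1.0, dt=1.0, steps=n)        # nu = 1: exact transport
u_smeared = upwind_advect(u0, c=1.0, dx=1.0, dt=0.5, steps=2 * n)  # nu = 0.5: false diffusion
```

After one full period, `u_exact` recovers the pulse while `u_smeared` has lost amplitude purely to numerical (false) diffusion, even though both runs conserve the total quantity.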

Keywords: cell-centered finite volume method, coupled solver, exponential differencing scheme (EDS), physical influence scheme (PIS), pressure weighted interpolation method (PWIM), skew upwind differencing scheme (SUDS)

Procedia PDF Downloads 262
255 Geostatistical Simulation of Carcinogenic Industrial Effluent on the Irrigated Soil and Groundwater, District Sheikhupura, Pakistan

Authors: Asma Shaheen, Javed Iqbal

Abstract:

Water resources are being depleted by the intrusion of industrial pollution. Clusters of industries, including leather tanning, textiles, batteries, and chemicals, are causing contamination: they use bulk quantities of water and discharge it laden with toxic effluent. The penetration of heavy metals through irrigation with industrial effluent has a toxic effect on soil and groundwater. There was a strong, significant positive correlation between all the heavy metals across the three media of industrial effluent, soil and groundwater (P < 0.001). The metal-to-metal associations were supported by dendrograms from cluster analysis. Geospatial variability was assessed using geographically weighted regression (GWR) and a pollution model to simulate the distribution of carcinogenic elements in soil and groundwater. Principal component analysis identified the metal sources: factor 1, explaining 48.8% of the variation, had significant loadings for sodium (Na), calcium (Ca), magnesium (Mg), iron (Fe), chromium (Cr), nickel (Ni), lead (Pb) and zinc (Zn), consistent with tannery effluent-based processes. In soil and groundwater, the metals had significant loadings on factor 1, representing more than half of the total variation (51.3% and 53.6%, respectively), which showed that the pollutants in soil and water were driven by industrial effluent. The cumulative eigenvalues for the three media were also greater than 1, indicating significant clustering of related heavy metals. The results showed that industrial processes are leaching toxic trace metals into the soil and groundwater, and these poisonous pollutants have turned fresh groundwater resources into unusable water. The dwindling availability of fresh water for irrigation and domestic use is alarming.

Keywords: groundwater, geostatistical, heavy metals, industrial effluent

Procedia PDF Downloads 213
254 The Therapeutic Effects of Acupuncture on Oral Dryness and Antibody Modification in Sjogren Syndrome: A Meta-Analysis

Authors: Tzu-Hao Li, Yen-Ying Kung, Chang-Youh Tsai

Abstract:

Oral dryness is a common chief complaint among patients with Sjögren syndrome (SS), a disorder characterized by autoantibody production; however, to the authors' best knowledge, no satisfactory pharmacological therapy relieves the associated symptoms. Hence, the effectiveness of non-pharmacological interventions such as acupuncture should be assessed. We conducted a meta-analysis of randomized clinical trials (RCTs) that evaluated the effectiveness of acupuncture for xerostomia in SS. PubMed, Embase, the Cochrane Central Register of Controlled Trials (CENTRAL), the Chongqing Weipu Database (CQVIP), the China Academic Journals Full-text Database, AiritiLibrary, Chinese Electronic Periodicals Service (CEPS), and the China National Knowledge Infrastructure (CNKI) Database were searched through May 12, 2018 to select studies. Data for the evaluation of subjective and objective xerostomia were extracted and assessed with random-effects meta-analysis. The search yielded a total of 541 references, and five RCTs were included, covering 340 patients with dry mouth resulting from SS, of whom 169 received acupuncture and 171 served as controls. The acupuncture group was associated with a higher subjective response rate (odds ratio 3.036, 95% confidence interval [CI] 1.828-5.042, P < 0.001) and an increased salivary flow rate (weighted mean difference [WMD] 3.066, 95% CI 2.969-3.164, P < 0.001) as an objective marker. In addition, two studies examined IgG levels, which were lower in the acupuncture group (WMD -166.857, 95% CI -233.138 to -100.576, P < 0.001). In the present meta-analysis, therefore, acupuncture improved both subjective and objective markers of dry mouth, with autoantibody reduction, in patients with SS, and may be considered an option for the non-pharmacological treatment of SS.
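
A weighted mean difference such as the salivary-flow WMD above is an inverse-variance-weighted average of per-trial mean differences. A simplified fixed-effect sketch (the study itself used a random-effects model, and the numbers below are hypothetical):

```python
import math

def pooled_wmd(mean_diffs, variances):
    """Inverse-variance (fixed-effect) pooling of mean differences with a Wald
    95% CI.  A simplified sketch of the meta-analytic pooling step."""
    weights = [1.0 / v for v in variances]
    wmd = sum(w * d for w, d in zip(weights, mean_diffs)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return wmd, (wmd - 1.96 * se, wmd + 1.96 * se)

# Hypothetical per-trial mean differences in salivary flow rate and variances
wmd, (lo, hi) = pooled_wmd([3.0, 3.2, 2.9], [0.01, 0.02, 0.015])
```

Trials with smaller variance get larger weight, which is why precise trials dominate the pooled estimate and narrow its confidence interval.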

Keywords: acupuncture, meta-analysis, Sjogren syndrome, xerostomia

Procedia PDF Downloads 104
253 Quality of Life and Renal Biomarkers in Feline Chronic Kidney Disease

Authors: Bárbara Durão, Pedro Almeida, David Ramilo, André Meneses, Rute Canejo-Teixeira

Abstract:

Quality of life (QoL) assessment is an integral part of patient care in veterinary medicine. This is especially true for chronic diseases such as chronic kidney disease (CKD), where ever more advanced treatment options prolong the patient's life. Whether this prolonged life comes with an acceptable quality of life has been called into question. The aim of this study was to evaluate the relationship between CKD biomarkers and QoL in cats. Thirty-seven cats diagnosed with CKD and with no known concurrent illness met the inclusion criteria and were enrolled in an observational study. Over the course of several evaluations, renal biomarkers were assessed in blood and urine samples, and owners retrospectively described their cat's quality of life using an instrument validated for this disease. Correlations between QoL scores and the biomarkers were assessed using Spearman's rank test; statistical significance was set at p < 0.05, and every serial sample was considered independent. All owners completed the questionnaire at each evaluation, giving a total of eighty-four questionnaires, and the average weighted impact score (AWIS) was -0.5. The results showed a statistically significant correlation between quality of life and most of the 17 studied biomarkers, confirming that CKD has a negative impact on QoL in cats, especially through the management of the disease and secondary appetite disorders. To our knowledge, this is the first attempt to assess the correlation between renal biomarkers and QoL in cats. Our results reveal the strong potential of this type of approach in clinical management, particularly in situations where measuring biomarkers is not possible. Since health-related QoL is a reliable predictor of mortality and morbidity in humans, our findings can help improve clinical practice in cats with CKD.

Keywords: chronic kidney disease, biomarkers, quality of life, feline

Procedia PDF Downloads 155
252 Magnetohydrodynamic Flow of Viscoelastic Nanofluid and Heat Transfer over a Stretching Surface with Non-Uniform Heat Source/Sink and Non-Linear Radiation

Authors: Md. S. Ansari, S. S. Motsa

Abstract:

In this paper, an analysis is made of the flow of a non-Newtonian viscoelastic nanofluid over a linearly stretching sheet under the influence of a uniform magnetic field. Heat transfer characteristics are analyzed taking into account the effects of nonlinear radiation and a non-uniform heat source/sink. The transport equations contain the simultaneous effects of Brownian motion and thermophoretic diffusion of nanoparticles. The relevant partial differential equations are non-dimensionalized and transformed into ordinary differential equations using appropriate similarity transformations. The transformed, highly nonlinear ordinary differential equations are solved by the spectral local linearisation method. The numerical convergence, error and stability analyses of the iteration schemes are presented. The effects of the different controlling parameters, namely radiation, space- and temperature-dependent heat source/sink, Brownian motion, thermophoresis, viscoelasticity, Lewis number and the magnetic force parameter, on the flow field, heat transfer characteristics and nanoparticle concentration are examined. The present investigation has many industrial and engineering applications in the fields of coatings and suspensions, cooling of metallic plates, oils and grease, paper production, coal-water or coal-oil slurries, heat exchanger technology, and materials processing.

Keywords: magnetic field, nonlinear radiation, non-uniform heat source/sink, similar solution, spectral local linearisation method, Rosseland diffusion approximation

Procedia PDF Downloads 353
251 An Analysis on the Appropriateness and Effectiveness of CCTV Location for Crime Prevention

Authors: Tae-Heon Moon, Sun-Young Heo, Sang-Ho Lee, Youn-Taik Leem, Kwang-Woo Nam

Abstract:

This study investigates the possibility of crime prevention through CCTV by analyzing the appropriateness of CCTV locations, i.e., whether cameras are installed in the hotspots of crime-prone areas, and by exploring the crime prevention effect and the displacement effect. The real crime and CCTV locations of the case city were converted into spatial data using GIS, and the data were analyzed by hotspot analysis and the weighted displacement quotient (WDQ). As for methods, the study first analyzed existing relevant studies to identify the trends of CCTV and crime research based on big data from 1800 to 2014 and to understand the relation between CCTV and crime. Second, it investigated the current situation of nationwide CCTVs and analyzed the guidelines for CCTV installation and operation to draw attention to the problems and shortcomings of domestic CCTV use. Third, it investigated crime occurrence in the case areas and the current situation of CCTV installation in spatial terms, and analyzed the appropriateness and effectiveness of CCTV installation to suggest a rational installation of CCTV and a strategic direction for crime prevention. The results demonstrate that the installation of CCTV had no significant effect on crime prevention, which indicates that CCTV should be installed and managed in a more scientific way reflecting local crime situations. For CCTV, spatial analysis methods such as GIS, which can evaluate the installation effect, and economic analysis methods such as cost-benefit analysis should be developed, and these methods should be distributed to local governments across the nation for the appropriate installation and operation of CCTV. This study intended to find a design guideline for optimal CCTV installation; in this regard, it is meaningful in that it will contribute to the creation of a safe city.
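
The WDQ compares crime change in a buffer zone around the cameras with the direct change in the target area, both normalized by a control area. A sketch of one common formulation, with signs chosen here for readability and hypothetical counts (the study's exact variant may differ):

```python
def wdq(target_before, target_after, buffer_before, buffer_after,
        control_before, control_after):
    """Weighted Displacement Quotient, sketched in one common form: crime change
    in the buffer zone around the cameras, relative to the control area, divided
    by the direct crime drop in the CCTV target area, relative to control.
    With these signs, q > 0 suggests displacement into the buffer and q < 0
    diffusion of benefit."""
    displaced = buffer_after / control_after - buffer_before / control_before
    direct = target_before / control_before - target_after / control_after
    return displaced / direct

# Hypothetical counts: crime falls 40% under the cameras while rising nearby
q = wdq(target_before=100, target_after=60,
        buffer_before=50, buffer_after=70,
        control_before=200, control_after=200)
```

Here q = 0.5, i.e., half of the direct benefit in the target area reappears as crime displaced into the surrounding buffer.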

Keywords: CCTV, safe city, crime prevention, spatial analysis

Procedia PDF Downloads 416
250 The Use of Bimodal Subtitles on Netflix English Movies in Enhancing Vocabulary

Authors: John Lloyd Angolluan, Jennile Caday, Crystal Mae Estrella, Reike Alliyah Taladua, Zion Michael Ysulat

Abstract:

An adequate vocabulary is one of the requirements for being able to communicate in English. Nowadays, people are increasingly engaged in streaming services such as Netflix, which let them watch movies in a very portable way and whose global demand for online media has taken off in recent years. This research aims to determine whether the use of bimodal subtitles in Netflix English movies can enhance vocabulary. The study is quantitative and utilizes a descriptive method. The respondents were selected second-year English majors of Rizal Technological University, Pasig and Boni Campuses, chosen through purposive sampling. The researchers conducted a survey questionnaire through Google Forms. The weighted mean was used to evaluate the students' responses to the statements addressing the problems of the study. The findings revealed that bimodal subtitles in Netflix English movies enhanced students' vocabulary acquisition by providing learners with access to large amounts of authentic and comprehensible language input, whether incidental or intentional, and that bimodal subtitles help students recognize vocabulary, which has a positive impact on their vocabulary building. The researchers therefore advocate watching English Netflix movies with bimodal subtitles during language learning, which may increase students' motivation and their use of bimodal subtitles in learning new vocabulary. Bimodal subtitles should be incorporated into educational film activities to provide students with a large amount of input to expand their vocabulary.
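
The weighted-mean evaluation used on the survey responses can be sketched as follows; the verbal-interpretation cutoffs below are the usual equal-width bands for a 4-point scale, not necessarily the study's own:

```python
def likert_weighted_mean(counts, weights=(4, 3, 2, 1)):
    """Weighted mean of 4-point Likert responses: option counts (Strongly Agree
    ... Strongly Disagree) times option weights, over total respondents."""
    return sum(c * w for c, w in zip(counts, weights)) / sum(counts)

def interpret(mean):
    """Map a 4-point weighted mean to a verbal label (hypothetical cutoffs)."""
    for cutoff, label in ((3.25, "Strongly Agree"), (2.5, "Agree"),
                          (1.75, "Disagree"), (0.0, "Strongly Disagree")):
        if mean >= cutoff:
            return label

# Hypothetical responses from 40 students to one survey statement
m = likert_weighted_mean([20, 15, 4, 1])
```

Reporting the weighted mean per statement, with its verbal interpretation, is what lets the researchers summarize agreement across the whole questionnaire.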

Keywords: bimodal subtitles, Netflix, English movies, vocabulary, subtitle, language, media

Procedia PDF Downloads 61
249 Application of Scoring Rubrics by Lecturers towards Objective Assessment of Essay Questions in the Department of Social Science Education, University of Calabar, Nigeria

Authors: Donald B. Enu, Clement O. Ukpor, Abigail E. Okon

Abstract:

Unreliable scoring of students' performance by lecturers short-changes students' assessment by under-equipping the school authority with the facts intended by society through the curriculum; hence, the learners, the school and society are cheated because the usefulness of testing is defeated. This study therefore examined lecturers' scoring objectivity for essay items in the Department of Social Science Education, University of Calabar, Nigeria. Specifically, it assessed lecturers' perception of the relevance of scoring rubrics and their level of application. Data were collected from all 36 lecturers in the Department (28 members and 8 non-members attached to the department) through a 20-item questionnaire and a checklist. A case-study design was adopted. Descriptive statistics of frequency counts, weighted means, standard deviations, and percentages were used to analyze the data gathered. A mean score of 2.5, or 60 percent and above, formed the acceptance or significance level in decision taking. It was found that lecturers perceived the use of scoring rubrics as a relevant practice for ensuring fair and reliable treatment of examinees' scripts, particularly in marking essay items, and that there is a moderately high level of adherence to the application of scoring rubrics. It was also observed that some criteria necessary for the objective scoring of essay items were not fully in place in the department. It was strongly recommended that students' identities be hidden while marking, and that a pre-determined marking scheme be prepared centrally and strictly adhered to during marking and the recording of scores. Conference marking should be enforced in the department.

Keywords: essay items, objective scoring, scorers reliability, scoring rubrics

Procedia PDF Downloads 156
248 Accurate Binding Energy of Ytterbium Dimer from Ab Initio Calculations and Ultracold Photoassociation Spectroscopy

Authors: Giorgio Visentin, Alexei A. Buchachenko

Abstract:

Recent proposals to use the Yb dimer as an optical clock and as a sensor for non-Newtonian gravity imply knowledge of its interaction potential. Here, the ground-state Born-Oppenheimer Yb₂ potential energy curve is represented by a semi-analytical function consisting of short- and long-range contributions. For the former, systematic ab initio all-electron exact two-component scalar-relativistic CCSD(T) calculations are carried out. Special care is taken to saturate the diffuse basis set component with atom- and bond-centered primitives and to reach the complete basis set limit through the n = D, T, Q sequence of correlation-consistent polarized n-zeta basis sets. Similar approaches are applied to the long-range dipole and quadrupole dispersion terms by implementing the CCSD(3) polarization propagator method for dynamic polarizabilities; the dispersion coefficients are then computed through Casimir-Polder integration. The semiclassical constraint on the number of bound vibrational levels known for the ¹⁷⁴Yb isotope is used to scale the potential function. The scaling, based on the most accurate ab initio results, bounds the interaction energy of two Yb atoms within the narrow range 734 ± 4 cm⁻¹, in reasonable agreement with previous ab initio-based estimations. The resulting potentials can be used as references for more sophisticated models that go beyond the Born-Oppenheimer approximation, and provide the means of estimating their uncertainties. The work is supported by Russian Science Foundation grant # 17-13-01466.

Keywords: ab initio coupled cluster methods, interaction potential, semi-analytical function, ytterbium dimer

Procedia PDF Downloads 131
247 Overweight and Neurocognitive Functioning: Unraveling the Antagonistic Relationship in Adolescents

Authors: Swati Bajpai, S. P. K Jena

Abstract:

Background: There has been a dramatic increase in the prevalence and severity of overweight in adolescents, raising concerns about its psychosocial and cognitive consequences and indicating the immediate need to understand the effects of increased weight on scholastic performance. Although the body of research is currently limited, available results have identified an inverse relationship between obesity and cognition in adolescents. Aim: To examine the association between increased Body Mass Index in adolescents and their neurocognitive functioning. Methods: A case-control study of 28 subjects aged 11-17 years (14 males and 14 females) was conducted, with assignment based on the main inclusion criterion (Body Mass Index) to an experimental group (overweight) and a control group (normal weight). A complete neurocognitive assessment was carried out using validated psychological scales, namely the Color Progressive Matrices (intelligence), the Bender Visual Motor Gestalt Test (perceptual-motor functioning), the PGI Memory Scale for Children (memory functioning) and Malin's Intelligence Scale for Indian Children (verbal and performance ability). Results: Statistical analysis showed that 57% of the experimental group lagged in cognitive abilities, especially in general knowledge (99.1±12.0 vs. 102.8±6.7), working memory (91.5±8.4 vs. 93.1±8.7), concrete ability (82.3±11.5 vs. 92.6±1.7) and perceptual-motor functioning (1.5±1.0 vs. 0.3±0.9), compared to the control group. Conclusion: Our investigations suggest that weight gain results, at least in part, from a neurological predisposition characterized by reduced executive function, and that obesity itself, in turn, has a compounding negative impact on the brain. A larger sample is, however, needed to make more affirmative claims.

Keywords: adolescents, body mass index, neurocognition, obesity

Procedia PDF Downloads 470
246 Design of Microwave Building Block by Using Numerical Search Algorithm

Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Qing Fang, Mingbin Yu, Guoqiang Lo

Abstract:

With the development of technology, countries have gradually allocated more and more of the frequency spectrum to civil and commercial usage, especially the high radio-frequency bands that offer high information capacity. Field effects become more and more prominent in microwave components as frequency increases, which invalidates transmission-line theory and complicates the design of microwave components. Here, a modeling approach based on a numerical search algorithm is proposed for designing various building blocks of microwave circuits, avoiding complicated impedance matching and equivalent-electrical-circuit approximation. Concretely, a microwave component is discretized into a set of segments along the microwave propagation path. Each segment is initialized with random dimensions, which constructs a multi-dimensional parameter space. Numerical search algorithms (e.g., the pattern search algorithm) are then used to find the ideal geometrical parameters; the optimal parameter set is reached by evaluating the fitness of the S-parameters over a number of iterations. We have adopted this approach in our current projects and designed many microwave components, including sharp bends, T-branches, Y-branches, and microstrip-to-stripline converters. For example, a stripline 90° bend was designed in a 2.54 mm x 2.54 mm space for dual-band operation (Ka band and Ku band) with < 0.18 dB insertion loss and < -55 dB reflection. We expect this approach to enrich the toolkit of microwave designers.
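
A minimal pattern search over the segment-dimension parameter space can be sketched as below. The toy fitness function stands in for the S-parameter evaluation, which in the real flow requires an electromagnetic solver; everything here is an illustrative sketch, not the authors' implementation:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Minimal compass/pattern search: poll +/- step along each axis, move to
    the first improving point, otherwise halve the step, until the mesh is
    finer than tol.  Derivative-free, as the geometry search requires."""
    x, fx = list(x0), f(x0)
    it = 0
    while step >= tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5          # refine the mesh
        it += 1
    return x, fx

# Toy "fitness": squared distance of two segment dimensions from a target
best, val = pattern_search(lambda p: (p[0] - 2.54) ** 2 + (p[1] + 1.0) ** 2,
                           x0=[0.0, 0.0])
```

Because each poll needs only a fitness value, the same loop works unchanged when `f` wraps a full-wave simulation returning, say, the deviation of S11 and S21 from their targets.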

Keywords: microwave component, microstrip and stripline, bend, power division, numerical search algorithm

Procedia PDF Downloads 359
245 Utility of Geospatial Techniques in Delineating Groundwater-Dependent Ecosystems in Arid Environments

Authors: Mangana B. Rampheri, Timothy Dube, Farai Dondofema, Tatenda Dalu

Abstract:

Identifying and delineating groundwater-dependent ecosystems (GDEs) is critical to a sound understanding of their spatial distribution as well as of groundwater allocation. However, this information is inadequately understood because of the limited data available for most areas of concern. This study addresses the gap by using remotely sensed data, the analytical hierarchy process (AHP) and in-situ data to identify and delineate GDEs in the Khakea-Bray Transboundary Aquifer. We developed a GDE index that integrates seven explanatory variables, namely the Normalized Difference Vegetation Index (NDVI), the Modified Normalized Difference Water Index (MNDWI), land use and land cover (LULC), slope, the Topographic Wetness Index (TWI), flow accumulation and curvature. The GDE map was delineated using the weighted overlay tool in the ArcGIS environment and spatially classified into two classes: GDEs and non-GDEs. The results showed that only 1.34% (721.91 km²) of the area is characterized by GDEs. Finally, groundwater level (GWL) data were used for validation through correlation analysis. Our results indicated that: 1) GDEs are concentrated in the northern, central, and south-western parts of the study area, and 2) the GDE classes do not overlap with the GWLs recorded at the 22 boreholes in the area. Nevertheless, the results show that GDEs can be delineated in the study area using remote sensing and GIS techniques along with AHP. The results of this study further contribute to identifying and delineating priority areas where appropriate water conservation programs, as well as strategies for sustainable groundwater development, can be implemented.
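
In the AHP step, the layer weights for the weighted overlay come from a pairwise-comparison matrix via its principal eigenvector, with a consistency check. A sketch with hypothetical judgments for three of the seven layers (the study's actual comparison matrix is not given in the abstract):

```python
def ahp_weights(pairwise, iters=100):
    """AHP priority weights via power iteration (principal eigenvector), with
    the consistency ratio for a 3x3 matrix (random index RI = 0.58)."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    aw = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n     # lambda_max estimate
    ci = (lam - n) / (n - 1)                          # consistency index
    return w, ci / 0.58                               # consistency ratio

# Hypothetical pairwise judgments: NDVI vs. TWI vs. slope on Saaty's 1-9 scale
m = [[1.0, 3.0, 5.0],
     [1 / 3.0, 1.0, 3.0],
     [1 / 5.0, 1 / 3.0, 1.0]]
weights, cr = ahp_weights(m)
```

A consistency ratio below 0.1 is the usual threshold for accepting the judgments; the resulting weights then multiply the reclassified raster layers in the weighted overlay.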

Keywords: analytical hierarchy process (AHP), explanatory variables, groundwater-dependent ecosystems (GDEs), Khakea-Bray Transboundary Aquifer, Sentinel-2

Procedia PDF Downloads 89
244 Influenza Vaccine Uptake Among Tunisian Physicians in the 2018-2019 Influenza Season

Authors: Ines Cherif, Ghassen Kharroubi, Leila Bouabid, Adel Gharbi, Aicha Boukthir, Margaret Mccarron, Nissaf Ben Alaya, Afif Ben Salah, Jihene Bettaieb

Abstract:

Healthcare workers' influenza vaccination protects both patients and caregivers from influenza disease. In this study, we aimed to assess influenza vaccine (IV) coverage among Tunisian physicians in 2018-2019 and to determine factors associated with IV receipt. A cross-sectional study was carried out in Tunisian primary and secondary health care facilities during the 2018-2019 influenza season. Physicians with direct patient contact were recruited according to a self-weighted multistage sampling design. Data were collected through a face-to-face questionnaire containing questions on knowledge, attitudes, and practices regarding the IV. Bivariate analysis was used to determine factors associated with IV receipt. A total of 167 physicians were included in the study, with a mean age of 48.2 ± 7.7 years and a sex ratio (M:F) of 0.37. Among participants, 15.1% (95% CI: [9.7%-20.3%]) were vaccinated against influenza in the 2018-2019 season. Bivariate analysis revealed that previous flu immunization in the four years preceding the 2018-2019 season (OR = 32.3; p < 0.001), belief that vaccinating healthcare workers may reduce work absenteeism (OR = 4.7, p = 0.028), belief that the flu vaccine should be mandatory for healthcare workers (OR = 3.3, p = 0.01), and high confidence in IV efficacy in preventing influenza among caregivers (OR = 4.5, p = 0.01) were associated with higher IV receipt among physicians. Less than one fifth of Tunisian physicians were vaccinated against influenza in 2018-2019. Higher vaccine uptake was related to stronger belief in vaccine efficacy in preventing influenza among both patients and caregivers. This underscores the need for periodic educational campaigns to raise physicians' awareness of IV efficacy. A switch to a mandatory IV policy should also be considered.
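
The odds ratios reported in the bivariate analysis come from 2x2 contingency tables. A minimal sketch of that computation; the cell counts below are hypothetical, chosen only to illustrate the calculation, and are not the survey's data:

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table [[a, b], [c, d]]:
    rows = exposed / unexposed, columns = outcome yes / no."""
    return (a * d) / (b * c)

# exposure = previous flu immunization; outcome = vaccinated in 2018-2019
# (hypothetical counts)
or_prev = odds_ratio(20, 5, 10, 80)
```

With these illustrative counts the OR is (20×80)/(5×10) = 32, of the same order as the OR = 32.3 reported for previous immunization.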

Keywords: influenza vaccine, physicians, Tunisia, vaccination uptake

Procedia PDF Downloads 116
243 Extreme Value Theory Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety and monetary costs. There are established ways to calculate reliability, unreliability, failure density and failure rate. In this paper, the reliability of diesel generator fans was calculated through Extreme Value Theory. Extreme Value Theory is not widely used in the engineering field; its usage is well known in other areas such as hydrology, meteorology and finance. The significance of this theory lies in the fact that, unlike other statistical methods, it focuses on rare and extreme values rather than on averages. It should be noted that the theory is not designed exclusively for extreme events, but for extreme values in any event, so this is a good opportunity to test whether it can be applied in this situation. The contribution of the work is the calculation of time to failure, and hence reliability, in a new way, using statistics. Another advantage of this approach is that no technical details are needed: it can be applied to any part for which we need to know the time to failure, in order to plan appropriate maintenance but also to maximize usage and minimize costs. In this case, calculations were made for diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans; the ultimate goal was to decide whether or not to replace the working fans with a higher-quality fan to prevent future failures. The results of this method give an approximation of the time for which the fans will work as they should, and the probability that the fans will work longer than a certain estimated time.
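
One common extreme-value route to reliability is to fit a smallest-extreme-value (Gumbel-minimum) distribution to the failure times and read the reliability off its survival function. A minimal sketch using a method-of-moments fit; the abstract does not state which extreme-value family or fitting method was used, and the failure times below are hypothetical, not the field-study data:

```python
import math
import statistics

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def fit_gumbel_min(times):
    """Method-of-moments fit of the Gumbel-minimum distribution."""
    mean, sd = statistics.mean(times), statistics.stdev(times)
    beta = sd * math.sqrt(6) / math.pi       # scale
    mu = mean + EULER_GAMMA * beta           # location (E[T] = mu - gamma*beta)
    return mu, beta

def reliability(t, mu, beta):
    # R(t) = P(T > t) = exp(-exp((t - mu) / beta))
    return math.exp(-math.exp((t - mu) / beta))

times = [420, 510, 560, 600, 630, 650, 690, 720, 760, 810]  # hours, hypothetical
mu, beta = fit_gumbel_min(times)
```

`reliability(t, mu, beta)` then gives the probability that a fan survives beyond `t` hours, which is exactly the "percentage of probability of fans working more than a certain estimated time" the abstract describes.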

Keywords: extreme value theory, lifetime, reliability analysis, statistics, time to failure

Procedia PDF Downloads 310
242 Analytic Hierarchy Process for the Container Terminal Choice from Multiple Terminals within the Port of Colombo

Authors: G. M. B. P. Abeysekara, W. A. D. C. Wijerathna

Abstract:

Choosing a terminal from a region with multiple terminals is not a simple decision; it is very complex, because shipping lines must consider all of the influential factors for terminal choice at once, according to their requirements. Terminal choice is therefore a multiple-criteria decision-making (MCDM) situation under a specially designed decision hierarchy. Identifying the perspective of shipping lines on terminal choice is vitally important for decision makers at container terminals. Thus, this study evaluated the perceptions of main and feeder shipping lines regarding the container terminals of the Port of Colombo and ranked the terminals according to the shipping lines' preferences. The Analytic Hierarchy Process (AHP) model was adopted for this study, since it has features suited to MCDM: every influential factor is weighted using pairwise comparisons, and the consistency of the decision makers' judgments is checked to evaluate the trustworthiness of the gathered data. A rating method is then used to rank the terminals within the Port of Colombo by assigning particular preference values with respect to the criteria and sub-criteria. According to the findings of this study, main lines are chiefly concerned with the water depth of the approach channel, depth of berth, handling charges and handling equipment facilities, while feeder lines' main concerns are handling equipment facilities, loading and discharging efficiency, depth of berth and handling charges. The findings suggest concentrating on these emphasized areas in order to enhance the competitiveness of the terminals and to increase the number of vessel callings at the Port of Colombo. Applying these findings to the terminals within the Port of Colombo would lead to far stronger competition among terminals and would uplift the overall level of service.
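
The AHP weighting-and-consistency step described above can be sketched as follows: priority weights are derived from a pairwise-comparison matrix by the geometric-mean method, and a consistency ratio (CR) is computed to check the judgments. The 3x3 matrix is illustrative only, not the surveyed shipping lines' actual judgments:

```python
import math

# Pairwise-comparison matrix on Saaty's 1-9 scale (hypothetical judgments).
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
n = len(A)

# Geometric mean of each row, normalised to priority weights.
gm = [math.prod(row) ** (1 / n) for row in A]
w = [g / sum(gm) for g in gm]

# Approximate lambda_max, consistency index CI and ratio CR
# (random index RI = 0.58 for n = 3; CR < 0.1 is conventionally acceptable).
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lam_max = sum(Aw[i] / w[i] for i in range(n)) / n
CI = (lam_max - n) / (n - 1)
CR = CI / 0.58
```

Judgment matrices with CR above 0.1 would be returned to the respondents for revision, which is the "trustworthiness" check the abstract refers to.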

Keywords: AHP, main and feeder shipping lines, criteria, sub-criteria

Procedia PDF Downloads 403
241 Simulation and Synoptic Investigation of a Severe Dust Storm in Urmia Lake in the Middle East

Authors: Nasim Hossein Hamzeh, Karim Shukurov, Abbas Ranjbar Saadat Abadi, Alaa Mhawish, Christian Opp

Abstract:

Deserts are the main dust sources in the world, and recently dried lake beds have caused environmental problems in their surrounding areas. In this study, Urmia Lake was the source of dust from April 24 to April 25, 2017. The local dust storm combined with another large-scale dust storm that had originated from Saudi Arabia and Iraq 1-2 days earlier. Synoptic investigation revealed that the severe dust storm was produced by a strong Black Sea cyclone and a low-pressure system over the Middle East and Central Iraq, in conjunction with a high-pressure system associated with a high gradient contour and a quasi-stationary long-wave trough over the east and south of the Mediterranean Sea. Based on HYSPLIT 72-hour backward and forward trajectories, the most probable dust transport routes to and from the Urmia Lake region were estimated. Using the concentration weighted trajectory (CWT) method based on 24-hour backward and 24-hour forward trajectories, the spatial distributions of potential sources of the PM10 observed in the Urmia Lake region on April 23-26, 2017 were mapped. The vertical profile of dust particles simulated with the WRF-Chem model using two dust schemes showed dust ascending up to 5 km from the lake. The dust-scheme outputs also show that the simulated PM10 fluctuations lead the PM10 measured at five air pollution monitoring stations around Urmia Lake by about 12 hours during April 23-26, 2017.
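
The CWT statistic named above assigns to each grid cell the residence-time-weighted mean of the concentrations measured on the trajectories that cross it. A minimal sketch, treating each trajectory endpoint in a cell as one residence unit; the grid cells and PM10 values below are hypothetical, not the study's data:

```python
from collections import defaultdict

def cwt(trajectories):
    """trajectories: list of (pm10_at_receptor, [grid cells crossed]) pairs.
    Returns CWT_ij = sum_l(c_l * tau_ijl) / sum_l(tau_ijl) per cell."""
    num = defaultdict(float)   # sum of c_l * tau_ijl
    den = defaultdict(float)   # sum of tau_ijl
    for pm10, cells in trajectories:
        for cell in cells:
            num[cell] += pm10  # one endpoint = one residence unit (tau = 1)
            den[cell] += 1.0
    return {cell: num[cell] / den[cell] for cell in den}

# Hypothetical (lat, lon) grid cells crossed by two back-trajectories.
trajs = [(180.0, [(37, 45), (37, 46)]),
         (60.0,  [(37, 45)])]
field = cwt(trajs)
```

Cells repeatedly crossed by trajectories arriving with high PM10 get high CWT values and are flagged as potential source regions.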

Keywords: dust storm, synoptic investigation, WRF-Chem model, Urmia Lake, Lagrangian trajectory

Procedia PDF Downloads 191
240 Development and Acceptance of a Proposed Module for Enhancing the Reading and Writing Skills in Baybayin: The Traditional Writing System in the Philippines

Authors: Maria Venus G. Solares

Abstract:

The ancient Filipinos had their own script that differed from the modern Roman alphabet brought by the Spaniards. It consists of seventeen characters, three vowels and fourteen consonants, and is called Baybayin. The word Baybayin is a Tagalog word that refers to all the letters used in writing a language, an alphabet; the script itself, however, is syllabic. House Bill 4395, first proposed by Rep. Leopoldo Bataoil of the second district of Pangasinan in 2011, which later became House Bill 1022, the Declaration of the Baybayin as the National Writing System of the Philippines, prompted the researcher to conduct a study on the topic. The main objective of this study was to develop and assess a proposed module for enhancing students' reading and writing skills in Baybayin. The researchers wanted to ensure the acceptability of Baybayin using the proposed module and to meet the needs of students in developing their ability to read and write it. The researchers used a quasi-experimental design in this study. The data were collected through the initial and final assessments of students in Adamson University's ABM 1102, using a convenience sampling technique. Based on statistical analysis of the data using the weighted mean, standard deviation, and paired t-tests, the proposed module helped improve the students' literacy skills, and the exercises in the proposed module improved the acceptability of Baybayin in their minds. The study showed that there was a significant difference in the scores of students before and after the use of the module. The students' response to the assessment of their reading and writing skills in Baybayin was highly acceptable. This study will help develop students' reading and writing skills in Baybayin and support teaching Baybayin in response to the revival of a part of Philippine culture that has long been forgotten.
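
The before-and-after comparison above rests on a paired t-test. A minimal sketch of that test; the pre- and post-module score lists are hypothetical, not the Adamson University data:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for matched samples."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical Baybayin reading/writing scores before and after the module.
pre =  [10, 12, 9, 14, 11, 8, 13, 10]
post = [15, 16, 13, 18, 14, 12, 17, 15]
t_stat, df = paired_t(pre, post)
```

If the t statistic exceeds the critical value for the chosen significance level (about 2.365 for df = 7 at two-sided α = 0.05), the pre/post difference is declared significant, as the study reports for its sample.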

Keywords: Baybayin, proposed module, skill, acceptability

Procedia PDF Downloads 96
239 MRCP as a Pre-Operative Tool for Predicting Variant Biliary Anatomy in Living Related Liver Donors

Authors: Awais Ahmed, Atif Rana, Haseeb Zia, Maham Jahangir, Rashed Nazir, Faisal Dar

Abstract:

Purpose: Biliary complications represent the most common cause of morbidity in living related liver donor transplantation, and detailed preoperative evaluation of biliary anatomic variants is crucial for safe patient selection and improved surgical outcomes. The purpose of this study is to determine the accuracy of preoperative MRCP in predicting biliary variations when compared to intraoperative cholangiography (IOC) in living related liver donors. Materials and Methods: From 44 potential donors, 40 consecutive living related liver donors (13 females and 28 males) underwent donor hepatectomy at our centre from April 2012 to August 2013. The MRCP and IOC of all patients were retrospectively reviewed separately by two radiologists and a transplant surgeon. MRCP was performed on 1.5 Tesla MR magnets using a breath-hold, heavily T2-weighted radial slab technique. One patient was excluded due to suboptimal MRCP. The accuracy of MRCP for variant biliary anatomy was calculated. Results: MRCP accurately predicted the biliary anatomy in 38 of 39 cases (97%). Standard biliary anatomy was predicted by MRCP in 25 (64%) donors (100% sensitivity). Variant biliary anatomy was noted in 14 (36%) IOCs, of which MRCP predicted the precise anatomy of 13 variants (93% sensitivity). The two most common variations were drainage of the RPSD into the LHD (50%) and the triple confluence of the RASD, RPSD and LHD (21%). Conclusion: MRCP is a sensitive imaging tool for precise pre-operative mapping of biliary variations, which is critical to surgical decision making in living related liver transplantation.

Keywords: intraoperative cholangiogram, liver transplantation, living related donors, magnetic resonance cholangio-pancreaticogram (MRCP)

Procedia PDF Downloads 369
238 Numerical Calculation and Analysis of Fine Echo Characteristics of Underwater Hemispherical Cylindrical Shell

Authors: Hongjian Jia

Abstract:

A finite-length cylindrical shell with a spherical cap is a typical engineering approximation of actual underwater targets. Research on the omnidirectional acoustic scattering characteristics of this target model can provide a favorable basis for the detection and identification of actual underwater targets. The elastic resonance characteristics of the target result from the combined effects of target length, shell-thickness ratio and material. Under different materials and geometric dimensions, the coincidence resonance characteristics of the target differ markedly. To address this problem, this paper obtains the omnidirectional acoustic scattering field of the underwater hemispherical cylindrical shell by numerical calculation and studies, in turn, the influence of the target's geometric parameters (length, shell-thickness ratio) and material parameters on its coincidence resonance characteristics. The study found that the formant interval is not a stable value and changes with the incident angle. The formant interval is only weakly affected by the target length and shell-thickness ratio but is significantly affected by the material properties, making it an effective feature for classifying and identifying targets of different materials. A quadratic polynomial is used to fit the relationship between the formant interval and the angle. The results show that the three fitting coefficients of the stainless steel and aluminum targets differ significantly and can be used as effective feature parameters to characterize the target material.
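
The final fitting step, a least-squares quadratic y = ax² + bx + c of formant interval versus incident angle, can be sketched via the normal equations. The (angle, interval) samples below are synthetic, generated from a known quadratic purely to exercise the fit, not the paper's computed data:

```python
def quad_fit(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the normal equations,
    solved with Cramer's rule. Returns [a, b, c]."""
    n = len(xs)
    s = lambda p: sum(x ** p for x in xs)
    sy = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    M = [[s(4), s(3), s(2)],
         [s(3), s(2), s(1)],
         [s(2), s(1), n]]
    v = [sy(2), sy(1), sy(0)]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3(M)
    coeffs = []
    for j in range(3):  # replace column j of M with v (Cramer's rule)
        Mj = [[v[i] if k == j else M[i][k] for k in range(3)] for i in range(3)]
        coeffs.append(det3(Mj) / D)
    return coeffs

angles = [0, 15, 30, 45, 60, 75, 90]                    # degrees
intervals = [0.002 * x * x - 0.05 * x + 10 for x in angles]  # synthetic quadratic
a, b, c = quad_fit(angles, intervals)
```

Comparing the fitted triples (a, b, c) across materials is the classification feature the paper proposes.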

Keywords: hemispherical cylindrical shell, fine echo characteristics, geometric and material parameters, formant interval

Procedia PDF Downloads 81
237 Factor Influencing Pharmacist Engagement and Turnover Intention in Thai Community Pharmacist: A Structural Equation Modelling Approach

Authors: T. Nakpun, T. Kanjanarach, T. Kittisopee

Abstract:

Turnover of community pharmacists can affect continuity of patient care and, most importantly, the quality of care, as well as the costs of a pharmacy. It was hypothesized that organizational resources, job characteristics, and social supports had a direct effect on pharmacist turnover intention, and an indirect effect on turnover intention via pharmacist engagement. This research aimed to study the factors influencing pharmacist engagement and turnover intention by testing the proposed structural hypothesized model explaining the relationships among organizational resources, job characteristics, and social supports as they affect turnover intention and engagement in Thai community pharmacists. A cross-sectional study with a self-administered questionnaire was conducted in 209 Thai community pharmacists. Data were analyzed using the structural equation modeling technique with the Analysis of Moment Structures (AMOS) program. The final model showed that only organizational resources had a significant negative direct effect on pharmacist turnover intention (β = -0.45). Job characteristics and social supports had significant positive relationships with pharmacist engagement (β = 0.44 and 0.55, respectively). Pharmacist engagement had a significant negative relationship with turnover intention (β = -0.24). Thus, job characteristics and social supports had significant negative indirect effects on turnover intention via pharmacist engagement (β = -0.11 and -0.13, respectively). The model fit the data well (χ²/degrees of freedom (DF) = 2.12, goodness-of-fit index (GFI) = 0.89, comparative fit index (CFI) = 0.94, and root mean square error of approximation (RMSEA) = 0.07). It can be concluded that organizational resources were the most important factor because they had a direct effect on pharmacist turnover intention; job characteristics and social supports also helped decrease turnover intention via pharmacist engagement.
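
The reported indirect effects follow directly from the path coefficients: in a path model, an indirect effect is the product of the direct paths it traverses. A one-line check using the coefficients quoted in the abstract:

```python
# Path coefficients from the abstract's final SEM model.
path_job_to_engage = 0.44       # job characteristics -> engagement
path_social_to_engage = 0.55    # social supports -> engagement
path_engage_to_turnover = -0.24 # engagement -> turnover intention

# Indirect effect = product of the component paths.
indirect_job = path_job_to_engage * path_engage_to_turnover        # ~ -0.11
indirect_social = path_social_to_engage * path_engage_to_turnover  # ~ -0.13
```

Both products round to the β = -0.11 and β = -0.13 values reported in the abstract.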

Keywords: community pharmacist, influencing factor, turnover intention, work engagement

Procedia PDF Downloads 176
236 Confidence Intervals for Process Capability Indices for Autocorrelated Data

Authors: Jane A. Luke

Abstract:

Persistent pressure passed on to manufacturers from escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models. Academic and industrial circles are taking a keen interest in the field of manufacturing strategy. Many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability and innovation. Process capability analysis is usually conducted assuming that the process under study is in statistical control and that independent observations are generated over time. In practice, however, it is very common to come across processes which, due to their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts: even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts, and when observations are autocorrelated the classical charts exhibit nonrandom patterns and an apparent lack of control. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. In this paper, the effect of autocorrelation on confidence intervals for different PCIs is examined. Stationary Gaussian processes are explained, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed and computed for both independent and autocorrelated data. Approximate lower confidence limits for Cpk are computed assuming an AR(1) model for the data. Simulation studies and industrial examples are presented to demonstrate the results.
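
The keyword list mentions Bissell's approximation, the classical one-sided lower confidence limit for Cpk under independence. A minimal sketch of that baseline computation; the sample data and specification limits are synthetic, and note that for AR(1)-autocorrelated data, the point of the paper, this independence-based limit would need adjustment:

```python
import math
import statistics

def cpk(data, lsl, usl):
    """Point estimate of Cpk from a sample and specification limits."""
    m, s = statistics.mean(data), statistics.stdev(data)
    return min(usl - m, m - lsl) / (3 * s)

def bissell_lcl(cpk_hat, n, z=1.645):
    """Bissell's approximate 95% one-sided lower confidence limit for Cpk,
    valid for independent observations."""
    return cpk_hat * (1 - z * math.sqrt(1 / (9 * n * cpk_hat ** 2)
                                        + 1 / (2 * (n - 1))))

# Synthetic in-control sample with LSL = 9.0, USL = 11.0.
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0, 9.9,
        10.1, 9.8, 10.0, 10.2, 9.9, 10.0, 10.1, 9.9, 10.0, 10.3]
cpk_hat = cpk(data, lsl=9.0, usl=11.0)
lcl = bissell_lcl(cpk_hat, n=len(data))
```

Positive autocorrelation inflates the sampling variability of the estimated Cpk, so the AR(1)-adjusted limits studied in the paper fall below this independence-based value.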

Keywords: autocorrelation, AR(1) model, Bissell’s approximation, confidence intervals, statistical process control, specification limits, stationary Gaussian processes

Procedia PDF Downloads 364
235 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that removes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF) and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)

Procedia PDF Downloads 68
234 Demand Forecasting to Reduce Dead Stock and Loss Sales: A Case Study of the Wholesale Electric Equipment and Part Company

Authors: Korpapa Srisamai, Pawee Siriruk

Abstract:

The purpose of this study is to forecast product demands and develop appropriate and adequate procurement plans to meet customer needs and reduce costs. When stock exceeds customer demand or does not move, the company must provide additional storage space, and some items, when stored for a long period of time, deteriorate into dead stock. A case study of a wholesale company for electronic equipment and components, which faces uncertain customer demand, is considered. Customers' actual purchase orders do not equal the forecasts the customers provide. In some cases, customers demand more, so that the product is insufficient to meet their needs; other customers demand less than estimated, causing insufficient storage space and dead stock. This study aims to reduce lost sales opportunities and the number of goods remaining in the warehouse, using 30 samples of the company's most popular products. The data were collected during the study period from January to October 2022. The forecasting methods used are the simple moving average, weighted moving average, and exponential smoothing methods. The economic order quantity (EOQ) and reorder point are then calculated to meet customer needs and track results. The research results are very beneficial to the company: it can reduce lost sales opportunities by 20%, so that it has enough products to meet customer needs, and can reduce dead stock by up to 10%. This enables the company to order products more accurately, increasing profits and storage space.
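
The three forecasting methods and the two inventory formulas named above can be sketched together. The demand series and cost parameters are hypothetical, not the company's data:

```python
import math

demand = [120, 135, 128, 150, 142, 138]  # monthly units, hypothetical

def sma(series, k):
    """Simple moving average of the last k periods."""
    return sum(series[-k:]) / k

def wma(series, weights):
    """Weighted moving average; weights ordered oldest->newest, sum to 1."""
    return sum(w * x for w, x in zip(weights, series[-len(weights):]))

def exp_smooth(series, alpha):
    """Single exponential smoothing: f_t = alpha*x_t + (1-alpha)*f_{t-1}."""
    f = series[0]
    for x in series[1:]:
        f = alpha * x + (1 - alpha) * f
    return f

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: sqrt(2*D*S/H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def reorder_point(daily_demand, lead_time_days):
    """Reorder point without safety stock: d * L."""
    return daily_demand * lead_time_days

forecast_sma = sma(demand, 3)
forecast_wma = wma(demand, [0.2, 0.3, 0.5])
forecast_es = exp_smooth(demand, alpha=0.3)
q_star = eoq(annual_demand=1600, order_cost=50.0, holding_cost=2.0)
rop = reorder_point(daily_demand=5.0, lead_time_days=7)
```

In the study, the method with the lowest forecast error per product would drive the EOQ and reorder-point calculations; the formulas above are the textbook forms without safety stock.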

Keywords: demand forecast, reorder point, lost sales, dead stock

Procedia PDF Downloads 90
233 Multi-Objective Multi-Period Allocation of Temporary Earthquake Disaster Response Facilities with Multi-Commodities

Authors: Abolghasem Yousefi-Babadi, Ali Bozorgi-Amiri, Aida Kazempour, Reza Tavakkoli-Moghaddam, Maryam Irani

Abstract:

All over the world, natural disasters (e.g., earthquakes, floods, volcanoes and hurricanes) cause many deaths. Earthquakes are catastrophic events triggered by unusual phenomena that lead to great losses around the world; such disasters strongly demand long-term help and relief, which can be hard to manage. Supplies and facilities are critical challenges after any earthquake and should be prepared for the disaster region to satisfy the demands of the people suffering from it. This paper proposes a disaster response facility allocation problem for disaster relief operations as a mathematical programming model. Earthquake victims need not only consumable commodities (e.g., food and water) but also non-consumable commodities (e.g., clothes) to protect themselves; it follows that attention to both the disaster points and the people's demands is necessary. To deal with this, both consumable and non-consumable commodities are considered in the presented model. This paper presents a multi-objective, multi-period mathematical programming model that simultaneously minimizes the average weighted response time and the total operational cost, including penalty costs for unmet demand and unused commodities. Furthermore, a Chebyshev multi-objective solution procedure is applied as a powerful algorithm to solve the proposed model. Finally, to illustrate the model's applicability, a case study of the Tehran earthquake is presented, and a sensitivity analysis is carried out to validate the model.
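
A Chebyshev (min-max) multi-objective procedure scores each candidate solution by its worst weighted deviation from the ideal point and keeps the smallest score. A minimal sketch of that scalarisation for the two objective families in the model; the candidate plans, objective values and weights are hypothetical:

```python
def chebyshev_score(objectives, ideal, weights):
    """Weighted Chebyshev distance from the ideal point (all objectives minimised)."""
    return max(w * (f - z) for f, z, w in zip(objectives, ideal, weights))

# Two objectives per plan: (average weighted response time, total cost).
candidates = {
    "plan_A": (4.0, 900.0),
    "plan_B": (6.0, 700.0),
}
ideal = (4.0, 700.0)    # best value of each objective taken separately
weights = (0.5, 0.5)    # decision-maker weights, hypothetical

best = min(candidates,
           key=lambda k: chebyshev_score(candidates[k], ideal, weights))
```

Varying the weights traces out different compromise solutions on the Pareto front, which is how the procedure supports the multi-objective trade-off in the paper.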

Keywords: facility location, multi-objective model, disaster response, commodity

Procedia PDF Downloads 236
232 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data

Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates

Abstract:

Several spatial variables collected at the same location that share a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account both the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a geostatistical multivariate formulation that relies on sharing common spatial random effect terms. In particular, the first response variable can be modeled by a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term, in addition to specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function; in order to improve computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically the block nearest neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross blocks capture the spatial dependence on a large scale, while each individual block captures the spatial dependence on a smaller scale. The multivariate geostatistical model belongs to the class of latent Gaussian models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.

Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models

Procedia PDF Downloads 71
231 Geographic Information System Based Development Potentiality Assessment for Rural Villages: Case Study in Fuliang County, Jingdezhen

Authors: Sishen Wang

Abstract:

The development of rural industry is currently a major task in China amid rapid urbanization. A development potentiality assessment, which evaluates the overall suitability of each village for further industrial development, can offer a reference for policy makers, especially considering the limited data available in Chinese rural regions. The study focuses on the 157 official villages in Fuliang County and evaluates their development potentiality through their topography, transportation conditions, population, villager income, infrastructure and environmental conditions. Land cover changes for Fuliang County and the areas surrounding each village are also investigated for reference. The final development potentiality of each village was calculated by adding weighted scores for the different categories. In addition, inverse distance weighting (IDW) surfaces for both the final development potentiality score and each individual factor were produced and compared to help interpret the final result. The study found that villages in the southern and northern regions have higher development potentiality than villages in the eastern and western regions, mainly because of higher villager incomes, good accessibility and larger populations. Fuliang County was divided into five regions based on the final result, and policy recommendations for the development of each region were put forward individually. Three suggestions were made to improve local development potentiality: first, transportation accessibility in the northern regions should be improved by building more public transit there; second, the environmental and infrastructure conditions in the eastern region of the county need improvement; third, incentives and job opportunities should be set up in the western regions to attract the labor force to move in and settle down.
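
The scoring step described above, a weighted sum of per-category scores per village, can be sketched as follows. Factor names, normalised scores and weights are illustrative, not the study's actual values:

```python
# Categories evaluated per village (from the abstract) with hypothetical
# weights that sum to 1; each village's scores are normalised to 0-1.
factors = ["topography", "transport", "population", "income",
           "infrastructure", "environment"]
weights = [0.10, 0.20, 0.20, 0.25, 0.15, 0.10]

villages = {
    "village_A": [0.6, 0.8, 0.7, 0.9, 0.5, 0.6],
    "village_B": [0.4, 0.3, 0.5, 0.4, 0.6, 0.7],
}

# Development potentiality = weighted sum of the factor scores.
potential = {name: sum(w * s for w, s in zip(weights, scores))
             for name, scores in villages.items()}
ranking = sorted(potential, key=potential.get, reverse=True)
```

The resulting scores can then be interpolated into an IDW surface in GIS to visualise the spatial pattern of potentiality across the county.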

Keywords: development potentiality, Fuliang, GIS-based, official village

Procedia PDF Downloads 96