Search results for: least square estimates
1986 A Systematic Review on Development of a Cost Estimation Framework: A Case Study of Nigeria
Authors: Babatunde Dosumu, Obuks Ejohwomu, Akilu Yunusa-Kaltungo
Abstract:
Cost estimation in construction is often difficult, particularly when dealing with risks and uncertainties, which are inevitable and peculiar to developing countries like Nigeria. Direct consequences of these are major deviations in cost, duration, and quality. The fundamental aim of this study is to develop a framework for assessing the impacts of risk on cost estimation, which in turn causes variability between the contract sum and the final account. This is very important, as the initial estimates given to clients should reflect a certain magnitude of consistency and accuracy, upon which the client builds other planning-related activities, and should also enhance the capabilities of construction industry professionals by enabling better prediction of the final account from the contract sum. To achieve this, a systematic literature review was conducted with cost variability and construction projects as the search string within three databases: Scopus, Web of Science, and EBSCO (Business Source Premier), and the results were further analyzed to identify gap(s) in knowledge or research. From the extensive review, it was found that the number of factors causing deviation between final accounts and contract sums ranged between 1 and 45. Besides, it was discovered that a cost estimation framework similar to the Building Cost Information Service (BCIS) is unavailable in Nigeria, which is a major reason why initial estimates are very often inconsistent, leading to project delay, abandonment, or determination at the expense of the huge sums of money invested. It was concluded that the development of a cost estimation framework, adjudged an important tool in risk shedding rather than risk sharing in project risk management, would be a panacea to the cost estimation problems that lead to cost variability in the Nigerian construction industry by the time this ongoing Ph.D. research is completed. It was recommended that practitioners in the construction industry should always take risk into account in order to facilitate the rapid development of the construction industry in Nigeria, which should give stakeholders in both the private and public sectors a more in-depth understanding of the estimation effectiveness and efficiency to be adopted.
Keywords: cost variability, construction projects, future studies, Nigeria
Procedia PDF Downloads 209
1985 A Review of the Relation between Thermofluidic Properties of the Fluid in Micro Channel Based Cooling Solutions and the Shape of Microchannel
Authors: Gurjit Singh, Gurmail Singh
Abstract:
The shape of microchannels in microchannel heat sinks can have a significant impact on both heat transfer and fluid flow properties; the main quantities of interest are the heat transfer performance and the pressure drop, and some effects of microchannel shape on these properties are summarized here. The shape of microchannels can affect the heat transfer performance of microchannel heat sinks. Channels with rectangular or square cross-sections typically have higher heat transfer coefficients compared to circular channels. This is because rectangular or square channels have a larger wetted perimeter per unit cross-sectional area, which enhances the heat transfer from the fluid to the channel walls. The shape of microchannels can also affect the pressure drop across the heat sink. Channels with a rectangular cross-section usually have a higher pressure drop than circular channels. This is because the corners of rectangular channels create additional flow resistance, which leads to a higher pressure drop. Overall, the shape of microchannels in microchannel heat sinks can have a significant impact on the heat transfer and fluid flow properties of the heat sink. The optimal shape of microchannels depends on the specific application and the desired balance between heat transfer performance and pressure drop.
Keywords: heat transfer, microchannel heat sink, pressure drop, shape of microchannel
Procedia PDF Downloads 90
1984 Generalization of Tau Approximant and Error Estimate of Integral Form of Tau Methods for Some Class of Ordinary Differential Equations
Authors: A. I. Ma’ali, R. B. Adeniyi, A. Y. Badeggi, U. Mohammed
Abstract:
An error estimation of the integrated formulation of the Lanczos tau method for some classes of ordinary differential equations was reported. This paper is concerned with the generalization of tau approximants and their corresponding error estimates for the class of ordinary differential equations (ODEs) characterized by m + s = 3 (i.e., for m = 1, s = 2; m = 2, s = 1; and m = 3, s = 0), where m and s are the order of the differential equation and the number of overdetermination, respectively. The general results obtained were validated with some numerical examples.
Keywords: approximant, error estimate, tau method, overdetermination
Procedia PDF Downloads 606
1983 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads
Authors: Raja Umer Sajjad, Chang Hee Lee
Abstract:
Stormwater runoff is the leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors; the success of a monitoring program mainly depends on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012 – 2014) from a mixed land-use site located within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed through the year. The investigation of a large number of water quality parameters is time-consuming and resource intensive. In order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV) and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implication of sampling time on monitoring results, the number of samples required during a storm event, and the impact of the seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus and for heavy metals like lead, chromium, and copper, whereas Chemical Oxygen Demand (COD) was identified as a surrogate for organic matter. The CV among the different monitored water quality parameters was found to be high (ranging from 3.8 to 15.5). This suggests that the use of a grab sampling design to estimate the mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was found to be only 2% between two different sample-size approaches, i.e., 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of a storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters
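A minimal sketch of the surrogate-screening idea described above, assuming hypothetical event-mean concentrations rather than the Geumhak monitoring data: the coefficient of variation is computed per parameter, and PCA loadings indicate which parameters co-vary with TSS or COD.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical event-mean concentrations (rows = storm events, columns = parameters).
rng = np.random.default_rng(4)
params = ["TSS", "turbidity", "TP", "Pb", "COD", "BOD"]
data = np.abs(rng.normal([120, 80, 0.4, 0.05, 60, 25],
                         [60, 40, 0.2, 0.03, 25, 10], (30, 6)))

# Coefficient of variation per monitored parameter.
cv = data.std(axis=0, ddof=1) / data.mean(axis=0)
print(dict(zip(params, np.round(cv, 2))))

# PCA loadings: parameters loading heavily on the same component as TSS or COD
# are candidates for being represented by those surrogates.
pca = PCA(n_components=2).fit(StandardScaler().fit_transform(data))
print(np.round(pca.components_, 2))
```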
Procedia PDF Downloads 240
1982 Epidemiology of Toxoplasma gondii Infection in Animals of the Arabian Peninsula: A Systematic Review and Meta-Analysis
Authors: Ebtisam A. Al-Mslemani, Khalid A. Enan, Asmaa Abdelgadier, Nada Assaad, Zaynab Elhussein, Khalid Eltom
Abstract:
Background: Toxoplasma gondii (T. gondii) is a zoonotic parasite that can be transmitted from animals to humans, with felids acting as its definitive host. Thus, understanding the epidemiology of this parasite in animal populations is vital to controlling its transmission to humans as well as to other animal groups. Objectives: This systematic review and meta-analysis aim to summarise and analyse reports of T. gondii infection in animal species residing in the Arabian Peninsula. Methods: It was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), with relevant studies being retrieved from MEDLINE/PubMed, Scopus, Cochrane Library, Google Scholar and ScienceDirect. All articles published in Arabic or English languages between January 2000 and December 2020 were screened for eligibility. The random effects model was used to calculate the pooled prevalence of T. gondii infection in different animal populations which were found to harbour this infection. The critical appraisal tool for prevalence studies designed by the Joanna Briggs Institute (JBI) was used to assess the risk of bias in all included studies. Results: A total of 15 studies were retrieved, reporting prevalence estimates from 4 countries in this region and in 13 animal species. A quantitative meta-analysis estimated a pooled prevalence of 43% in felids [95% confidence interval (CI) = 23-64%, I² index = 100%], 48% in sheep (95% CI = 27-70%, I² = 99%) and 21% in camels (95% CI = 7-35%, I² = 99%). Evidence of possible publication bias was found in both felids and sheep. Conclusions: This meta-analysis estimates a high prevalence of T. gondii infection in animal species that are of high economic and cultural importance to countries of this region. Hence, these findings provide valuable insight to public health authorities as well as economic and animal resources advisors in countries of the Arabian Peninsula.
Keywords: Arabian Peninsula, Toxoplasma gondii, animals, meta-analysis, toxoplasmosis
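The pooling step described above can be illustrated with a DerSimonian-Laird random-effects calculation; the study counts below are invented for illustration and are not the data of this review.

```python
import numpy as np

# Hypothetical per-study data (cases, sample size); not the actual review data.
cases = np.array([30, 55, 12, 80])
n     = np.array([70, 120, 60, 150])

p   = cases / n                     # study-level prevalence
se2 = p * (1 - p) / n               # within-study variance (normal approximation)
w   = 1 / se2                       # fixed-effect weights

# DerSimonian-Laird between-study variance (tau^2) and heterogeneity (I^2)
p_fe = np.sum(w * p) / np.sum(w)
Q    = np.sum(w * (p - p_fe) ** 2)
df   = len(p) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
I2   = max(0.0, (Q - df) / Q) * 100

# Random-effects pooled prevalence and 95% confidence interval
w_re  = 1 / (se2 + tau2)
p_re  = np.sum(w_re * p) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
lo, hi = p_re - 1.96 * se_re, p_re + 1.96 * se_re
print(f"pooled prevalence = {p_re:.1%}, 95% CI = ({lo:.1%}, {hi:.1%}), I2 = {I2:.0f}%")
```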
Procedia PDF Downloads 82
1981 Investigating the Atmospheric Phase Distribution of Inorganic Reactive Nitrogen Species along the Urban Transect of Indo Gangetic Plains
Authors: Reema Tiwari, U. C. Kulshrestha
Abstract:
As a key regulator of atmospheric oxidative capacity and secondary aerosol formation, the signatures of reactive nitrogen (Nr) emissions are becoming increasingly evident in the cascade of air pollution, acidification, and eutrophication of the ecosystem. However, their accurate estimation in the N budget remains limited by photochemical conversion processes, where the differing atmospheric residence times of gaseous (NOₓ, HNO₃, NH₃) and particulate (NO₃⁻, NH₄⁺) Nr species govern their spatio-temporal evolution on a synoptic scale. The present study attempts to quantify such interactions under tropical conditions when low anticyclonic winds become favorable to advection from the west during winter. For this purpose, diurnal sampling was conducted using a low-volume sampler assembly, where ambient concentrations of Nr trace gases along with their ionic fractions in the aerosol samples were determined with a UV spectrophotometer and ion chromatography, respectively. The results showed a spatial gradient of the gaseous precursors with a much more pronounced inter-site variability (p < 0.05) than their particulate fractions. These observations were consistent with their limited photochemical conversion, where day-to-night (D/N) ratios of less than 1 for the different Nr fractions suggested an influence of boundary layer dynamics at the background site. These phase conversion processes were further corroborated with the molar ratios of NOₓ/NOᵧ and NH₃/NHₓ, where incomplete titration of NOₓ and NH₃ emissions was observed irrespective of the diurnal phase along the sampling transect. Calculations with equilibrium-based approaches for an NH₃-HNO₃-NH₄NO₃ system, on the other hand, were characterized by delays in equilibrium attainment, where plots of the below-deliquescence Kₘ and Kₚ values against 1000/T confirmed the role of lower temperature ranges in NH₄NO₃ aerosol formation. These results would help not only in resolving the changing atmospheric inputs of reduced (NH₃, NH₄⁺) and oxidized (NOₓ, HNO₃, NO₃⁻) Nr estimates but also in understanding the dependence of Nr mixing ratios on local meteorological conditions.
Keywords: diurnal ratios, gas-aerosol interactions, spatial gradient, thermodynamic equilibrium
Procedia PDF Downloads 128
1980 Air Quality Forecast Based on Principal Component Analysis-Genetic Algorithm and Back Propagation Model
Authors: Bin Mu, Site Li, Shijin Yuan
Abstract:
Under the circumstances of environmental deterioration, people are increasingly concerned about the quality of the environment, especially air quality. As a result, it is of great value to give accurate and timely forecasts of the AQI (air quality index). In order to simplify the influencing factors of air quality in a city and forecast the city’s AQI for the next day, this study used MATLAB software and adopted the method of constructing a PCA-GABP mathematical model to provide a solution. To be specific, this study first performed principal component analysis (PCA) on the factors influencing tomorrow’s AQI, including aspects of weather, industrial waste gas, and today’s IAQI data. Then, we used a back-propagation neural network model (BP), optimized by a genetic algorithm (GA), to forecast tomorrow’s AQI. In order to verify the validity and accuracy of the PCA-GABP model’s forecast capability, the study uses two statistical indices to evaluate the AQI forecast results (normalized mean square error and fractional bias). Eventually, this study reduces the mean square error by optimizing the individual gene structure in the genetic algorithm and adjusting the parameters of the back-propagation model. To conclude, the performance of the model in forecasting the AQI is comparatively convincing, and the model is expected to have a positive effect on AQI forecasting in the future.
Keywords: AQI forecast, principal component analysis, genetic algorithm, back propagation neural network model
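A minimal sketch of the PCA-plus-neural-network stage, assuming synthetic predictors and an ordinary back-propagation fit (the genetic-algorithm optimisation of the network used in the paper is omitted here):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for the weather / waste-gas / IAQI predictors and next-day AQI.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                            # 12 hypothetical influencing factors
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=500)     # toy next-day AQI target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# PCA reduces the factor set; a back-propagation network predicts tomorrow's AQI.
model = make_pipeline(StandardScaler(),
                      PCA(n_components=5),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

nmse = mean_squared_error(y_te, pred) / np.var(y_te)                    # normalized mean square error
fb   = 2 * (pred.mean() - y_te.mean()) / (pred.mean() + y_te.mean())    # fractional bias
print(f"NMSE = {nmse:.3f}, fractional bias = {fb:.3f}")
```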
Procedia PDF Downloads 226
1979 Poly (Diphenylamine-4-Sulfonic Acid) Modified Glassy Carbon Electrode for Voltammetric Determination of Gallic Acid in Honey and Peanut Samples
Authors: Zelalem Bitew, Adane Kassa, Beyene Misgan
Abstract:
In this study, a sensitive and selective voltammetric method based on a poly(diphenylamine-4-sulfonic acid) modified glassy carbon electrode (poly(DPASA)/GCE) was developed for the determination of gallic acid. The appearance of an irreversible oxidation peak for gallic acid at both the bare GCE and the poly(DPASA)/GCE, with about a three-fold current enhancement and a much reduced potential at the poly(DPASA)/GCE, showed the catalytic property of the modifier towards the oxidation of gallic acid. Under optimized conditions, the adsorptive stripping square wave voltammetric peak current response of the poly(DPASA)/GCE showed a linear dependence on gallic acid concentration in the range 5.00 × 10⁻⁷ − 3.00 × 10⁻⁴ mol L⁻¹, with a limit of detection of 4.35 × 10⁻⁹ mol L⁻¹. Spike recovery results of 94.62-99.63%, 95.00-99.80% and 97.25-103.20% for gallic acid in honey, raw peanut, and commercial peanut butter samples, respectively, interference recovery results with less than 4.11% error in the presence of uric acid and ascorbic acid, a lower LOD, and a relatively wider dynamic range than most previously reported methods validated the potential applicability of the poly(DPASA)/GCE-based method for the determination of gallic acid in real samples, including honey and peanut samples.
Keywords: gallic acid, diphenyl amine sulfonic acid, adsorptive anodic stripping square wave voltammetry, honey, peanut
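As an illustration of how a calibration line, sensitivity, detection limit, and spike recovery are typically computed from such voltammetric data, the sketch below uses invented concentration-current pairs and an assumed blank standard deviation, not the reported measurements.

```python
import numpy as np

# Hypothetical calibration points: gallic acid concentration (mol/L) vs. SWV peak current (µA).
conc    = np.array([5e-7, 1e-6, 1e-5, 1e-4, 3e-4])
current = np.array([0.08, 0.16, 1.55, 15.2, 46.1])   # illustrative values only

slope, intercept = np.polyfit(conc, current, 1)      # least-squares calibration line
sd_blank = 0.002                                     # assumed std. dev. of the blank signal (µA)

lod = 3 * sd_blank / slope                           # limit of detection, 3*sigma/slope

def recovery(found, spiked):
    """Percent spike recovery."""
    return 100 * found / spiked

print(f"sensitivity = {slope:.3e} µA L/mol, LOD = {lod:.2e} mol/L")
print(f"example recovery = {recovery(9.7e-5, 1e-4):.1f} %")
```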
Procedia PDF Downloads 78
1978 Tracing Sources of Sediment in an Arid River, Southern Iran
Authors: Hesam Gholami
Abstract:
Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in the catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework in the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediments (VS) samples with known source contributions using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%) and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique to quantify the source of sediments in the catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran
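A hedged sketch of the virtual-mixture evaluation described above: predicted sub-basin source contributions are compared with the known proportions through RMSE and MAE (the values below are placeholders, not the study data).

```python
import numpy as np

# Known contributions of the three sub-basin sources in virtual sediment mixtures (rows),
# and the corresponding modelled contributions.
known    = np.array([[0.20, 0.10, 0.70],
                     [0.35, 0.15, 0.50],
                     [0.10, 0.25, 0.65]])
modelled = np.array([[0.24, 0.08, 0.68],
                     [0.30, 0.18, 0.52],
                     [0.14, 0.21, 0.65]])

err  = modelled - known
rmse = np.sqrt(np.mean(err ** 2))   # root mean square error over all sources and mixtures
mae  = np.mean(np.abs(err))         # mean absolute error
print(f"RMSE = {rmse:.3f}, MAE = {mae:.3f}")
```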
Procedia PDF Downloads 74
1977 A Posteriori Analysis of the Spectral Element Discretization of Heat Equation
Authors: Chor Nejmeddine, Ines Ben Omrane, Mohamed Abdelwahed
Abstract:
In this paper, we present a posteriori analysis of the discretization of the heat equation by the spectral element method. We apply Euler's implicit scheme in time and the spectral method in space. We propose two families of error indicators, both of which are built from the residual of the equation, and we prove that they satisfy some optimal estimates. We present some numerical results which are coherent with the theoretical ones.
Keywords: heat equation, spectral elements discretization, error indicators, Euler
Procedia PDF Downloads 306
1976 Importance of CT and Timed Barium Esophagogram in the Contemporary Treatment of Patients with Achalasia
Authors: Sanja Jovanovic, Aleksandar Simic, Ognjan Skrobic, Dragan Masulovic, Aleksandra Djuric-Stefanovic
Abstract:
Introduction: Achalasia is an idiopathic primary esophageal motility disorder characterized by absent esophageal peristalsis and impaired swallow-induced relaxation of the lower esophageal sphincter (LES). It is a rare disease that affects both genders, with an incidence of 1/100,000 and a prevalence rate of 10/100,000 per year. Objective: Laparoscopic Heller myotomy (LHM) represents the therapy of choice for patients with achalasia, providing excellent outcomes. The aim of this study was to evaluate the significance of computed tomography (CT) in analyzing achalasia subtypes and of the timed barium esophagogram (TBE) in evaluating LHM success, as part of a standardized diagnostic protocol. Method: Fifty-one patients with achalasia, confirmed by manometric studies, underwent CT and TBE in addition to standardized diagnostic methods. CT was done with multiplanar reconstruction, measuring the wall thickness above the esophago-gastric junction in the axial plane. TBE was performed preoperatively and two days postoperatively with the patient swallowing low-density barium sulfate, and plain upright frontal films were obtained 1, 2 and 5 minutes after ingestion. In all patients, LHM was done, and the pre- and postoperative height and width of the barium column were compared. Results: According to the CT findings, we divided patients into 3 subtypes of achalasia according to wall thickness: < 4 mm as subtype I, between 4 - 9 mm as subtype II, and > 10 mm as subtype III. Correlation of the manometric results, as the reference values, with the CT findings indicated a CT sensitivity of 90% and specificity of 70% in establishing the subtypes of achalasia. The preoperative values of TBE at 1, 2 and 5 minutes were: median barium column height 17.4 ± 7.4, 15.9 ± 6.2 and 13.9 ± 6.2 cm; median column width 5 ± 1.5, 4.7 ± 1.6 and 4.5 ± 1.8 cm, respectively. LHM significantly reduced these values (height 7 ± 4.6, 5.8 ± 4.2, 3.7 ± 3.4 cm; width 2.9 ± 1.3, 2.6 ± 1.3 and 2.4 ± 1.4 cm), indicating the quantitative estimates of emptying as excellent (p value < 0.01). Conclusion: CT has high sensitivity and specificity in the evaluation of achalasia subtypes and can be introduced as an additional method for the standardized evaluation of these patients. The quantitative assessment of TBE based on measurements of the barium column is an accurate and beneficial method, which adequately estimates the esophageal emptying success of LHM.
Keywords: achalasia, computed tomography, esophagography, myotomy
Procedia PDF Downloads 234
1975 Usage of the Point Analysis Algorithm (SANN) on Drought Analysis
Authors: Khosro Shafie Motlaghi, Amir Reza Salemian
Abstract:
In arid and semi-arid regions like our country, evapotranspiration constitutes the greatest portion of the water resource. Therefore, knowledge of its changes and of other climate parameters plays an important role in the planning, development, and management of water resources. In this study, the long-term trends of evapotranspiration (ET0), average temperature, and monthly rainfall were tested. To do so, all synoptic stations in Iran were classified according to the De Martonne climate classification. The present research was carried out in the semi-arid climate of Iran, in which 14 synoptic stations with 30-year periods of statistics were investigated with 3 methods: minimum square error, Mann-Kendall, and Wald-Wolfowitz. Evapotranspiration was calculated by using the FAO Penman method. The results of the investigation over the statistical period have shown that the evapotranspiration trend is positive for 24 percent of the stations, negative for 2 percent, and without any trend for 47 percent. Similarly, the temperature trend was positive for 22 percent of the stations, negative for 19 percent, and without any trend for 64 percent. The results of the rainfall trend have shown that the amount of rainfall in most stations did not show a meaningful trend. The results of the Mann-Kendall method were similar to those of the minimum square error method. Regarding the acquired results, we can conclude that in future years some regions will face an increase in temperature and evapotranspiration.
Keywords: analysis, algorithm, SANN, ET0
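A simplified illustration of the trend testing described above: a Mann-Kendall test (without tie correction) and a least-squares slope applied to a synthetic 30-year ET0 series; the record shown is not from the Iranian stations.

```python
import numpy as np
from scipy import stats

def mann_kendall(series, alpha=0.05):
    """Two-sided Mann-Kendall trend test (no tie correction)."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    s = np.sum([np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n)])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return s, z, p, p < alpha

rng = np.random.default_rng(1)
years = np.arange(30)
et0 = 1200 + 2.0 * years + rng.normal(0, 15, 30)   # synthetic 30-year annual ET0 record (mm)

s, z, p, significant = mann_kendall(et0)
slope, intercept = np.polyfit(years, et0, 1)       # least-squares trend line
print(f"Mann-Kendall S = {s:.0f}, Z = {z:.2f}, p = {p:.4f}, significant trend = {significant}")
print(f"least-squares slope = {slope:.2f} mm/year")
```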
Procedia PDF Downloads 296
1974 Observationally Constrained Estimates of Aerosol Indirect Radiative Forcing over Indian Ocean
Authors: Sofiya Rao, Sagnik Dey
Abstract:
Aerosol-cloud-precipitation interaction continues to be one of the largest sources of uncertainty in quantifying the aerosol climate forcing. The uncertainty increases from the global to the regional scale. This problem remains unresolved due to the large discrepancy in the representation of cloud processes in climate models. Most of the studies on aerosol-cloud-climate interaction and aerosol-cloud-precipitation over the Indian Ocean (like the INDOEX and CAIPEEX campaigns) are restricted either to one particular season or to one particular region. Here we developed a theoretical framework to quantify aerosol indirect radiative forcing using Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol and cloud products over a 15-year (2000-2015) period over the Indian Ocean. This framework relies on the observationally constrained estimate of the aerosol-induced change in cloud albedo. We partitioned the change in cloud albedo into the change in Liquid Water Path (LWP) and Effective Radius of Clouds (Reff) in response to a change in aerosol optical depth (AOD). The cloud albedo response to an increase in AOD is most sensitive in the range of LWP between 120-300 g/m² for Reff varying from 8-24 micrometers, which means the clouds are most sensitive to aerosols in this range of LWP and Reff. Using this framework, the aerosol forcing during a transition from the indirect to the semi-direct effect is also calculated. The outcome of this analysis shows the best results over the Arabian Sea in comparison with the Bay of Bengal and the South Indian Ocean because of the heterogeneity in aerosol species over the Arabian Sea. Over the Arabian Sea, more absorbing aerosols dominate during the winter season, while during the pre-monsoon, dust (coarse-mode aerosol particles) dominates. In winter and pre-monsoon the aerosol forcing is more dominant, while during the monsoon and post-monsoon seasons meteorological forcing is more dominant. Over the South Indian Ocean, more or less the same type of aerosol (sea salt) is present. Over the Arabian Sea, the aerosol indirect radiative forcing is around -5 ± 4.5 W/m² for the winter season, while in the other seasons it is smaller. The results provide observationally constrained estimates of aerosol indirect forcing in the Indian Ocean, which can be helpful in evaluating climate model performance in the context of such complex interactions.
Keywords: aerosol-cloud-precipitation interaction, aerosol-cloud-climate interaction, indirect radiative forcing, climate model
Procedia PDF Downloads 175
1973 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance
Authors: Rajinder Singh, Ram Valluru
Abstract:
Chain Ladder (CL) method, Expected Loss Ratio (ELR) method and Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium to longer term liabilities. The relative strengths and weaknesses among various alternative approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement/disagreement between reported losses to date and ultimate loss estimate. CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, the approach generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their life-time development. This paper proposes an alternative approach of parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). The methodology was tested on an actual MI claim development dataset where various cohorts followed a sigmoidal trend, but levels varied substantially depending upon the economic and operational conditions during the development period spanning over many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produce more stable loss forecasts for reserving purposes as compared to the traditional CL and BF methods.Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility
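The parametric idea can be sketched by fitting a logistic (sigmoidal) development curve to one hypothetical accident-year cohort; the figures below are illustrative, not actual MI data, and the logistic regression across cohorts and exogenous covariates described above is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, ult, k, t0):
    """Sigmoidal development curve: cumulative losses approach an ultimate level `ult`."""
    return ult / (1 + np.exp(-k * (t - t0)))

# Hypothetical cumulative reported losses for one cohort (development years 1..8).
dev_years = np.arange(1, 9)
cum_loss  = np.array([12, 30, 55, 78, 92, 99, 103, 105], dtype=float)

params, _ = curve_fit(logistic, dev_years, cum_loss, p0=[110, 1.0, 3.0], maxfev=10000)
ultimate = params[0]
reserve  = ultimate - cum_loss[-1]     # indicated reserve = ultimate - reported to date
print(f"fitted ultimate = {ultimate:.1f}, indicated reserve = {reserve:.1f}")
```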
Procedia PDF Downloads 130
1972 Learning the Dynamics of Articulated Tracked Vehicles
Authors: Mario Gianni, Manuel A. Ruiz Garcia, Fiora Pirri
Abstract:
In this work, we present a Bayesian non-parametric approach to model the motion control of ATVs. The motion control model is based on a Dirichlet Process-Gaussian Process (DP-GP) mixture model. The DP-GP mixture model provides a flexible representation of patterns of control manoeuvres along trajectories of different lengths and discretizations. The model also estimates the number of patterns, sufficient for modeling the dynamics of the ATV.
Keywords: Dirichlet processes, gaussian mixture models, learning motion patterns, tracked robots for urban search and rescue
Procedia PDF Downloads 449
1971 Video Compression Using Contourlet Transform
Authors: Delara Kazempour, Mashallah Abasi Dezfuli, Reza Javidan
Abstract:
Video compression is used for channels with limited bandwidth and storage devices with limited capacity. One of the most popular approaches in video compression is the usage of different transforms. The discrete cosine transform is one of the video compression methods, but it has some problems such as blocking, noise, and high distortion, which adversely affect the compression ratio. The wavelet transform is another approach that is better than the cosine transform at balancing compression and quality, but its ability to capture curved discontinuities is limited. Because of the importance of compression and the problems of the cosine and wavelet transforms, the contourlet transform has become popular in video compression. In the newly proposed method, we use the contourlet transform for video image compression. The contourlet transform can preserve the details of the image better than the previous transforms because it is multi-scale and directional. This transform can recognize discontinuities such as edges, so in this approach less data is lost than in previous approaches. The contourlet transform provides a discrete-domain structure that is useful for representing two-dimensional piecewise-smooth images. It produces compressed images with a high compression ratio along with texture and edge preservation. Finally, the results show that for the majority of the images, the mean square error and peak signal-to-noise ratio of the new contourlet-based method are improved compared to the wavelet transform, but for most of the images, the mean square error and peak signal-to-noise ratio of the cosine transform are better than those of the contourlet-based method.
Keywords: video compression, contourlet transform, discrete cosine transform, wavelet transform
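A short sketch of the evaluation metrics mentioned above (mean square error and peak signal-to-noise ratio), applied to a synthetic frame rather than to actual contourlet-compressed video:

```python
import numpy as np

def mse_psnr(original, reconstructed, peak=255.0):
    """Mean square error and peak signal-to-noise ratio (dB) between two frames."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    psnr = float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
    return mse, psnr

rng = np.random.default_rng(0)
frame   = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)                 # stand-in frame
decoded = np.clip(frame + rng.normal(0, 3, frame.shape), 0, 255).astype(np.uint8)

mse, psnr = mse_psnr(frame, decoded)
print(f"MSE = {mse:.2f}, PSNR = {psnr:.2f} dB")
```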
Procedia PDF Downloads 443
1970 Mathematical and Numerical Analysis of a Nonlinear Cross Diffusion System
Authors: Hassan Al Salman
Abstract:
We consider a nonlinear parabolic cross diffusion model arising in applied mathematics. A fully practical piecewise linear finite element approximation of the model is studied. By using entropy-type inequalities and compactness arguments, existence of a global weak solution is proved. Providing further regularity of the solution of the model, some uniqueness results and error estimates are established. Finally, some numerical experiments are performed.
Keywords: cross diffusion model, entropy-type inequality, finite element approximation, numerical analysis
Procedia PDF Downloads 382
1969 Electrochemical Sensing of L-Histidine Based on Fullerene-C60 Mediated Gold Nanocomposite
Authors: Sanjeeb Sutradhar, Archita Patnaik
Abstract:
Histidine is one of the twenty-two naturally occurring amino acids and is an essential amino acid; it exists in two enantiomeric forms, L-histidine and D-histidine. D-Histidine is biologically inert, while L-histidine is bioactive because of its conversion to the neurotransmitter or neuromodulator histamine in both the brain and the central nervous system. The deficiency of L-histidine causes serious diseases like Parkinson’s disease, epilepsy, and the failure of normal erythropoiesis development. Gold nanocomposites are attractive materials due to their excellent biocompatibility and their ease of adsorption on the electrode surface. In the present investigation, hydrophobic fullerene-C60 was functionalized with homocysteine via a nucleophilic addition reaction to make it hydrophilic, and it was subsequently combined with in-situ prepared gold nanoparticles, using ascorbic acid as the reducing agent, to form the nanocomposite. Electronic structure calculations of the AuNPs@Hcys-C60 nanocomposite showed a drastic reduction of the HOMO-LUMO gap compared to the corresponding molecules of interest, indicating enhanced electron transportability to the electrode surface. In addition, the electrostatic potential map of the nanocomposite showed that the charge was distributed over either end of the nanocomposite, evidencing faster direct electron transfer from the nanocomposite to the electrode surface. This nanocomposite showed catalytic activity; the nanocomposite-modified glassy carbon electrode showed a tenfold higher electron transfer rate constant (kₑt) than the bare glassy carbon electrode. A significant improvement in its sensing behavior by square wave voltammetry was noted.
Keywords: fullerene-C60, gold nanocomposites, L-Histidine, square wave voltammetry
Procedia PDF Downloads 250
1968 Financial Liberalization and Allocation of Bank Credit in Malaysia
Authors: Chow Fah Yee, Eu Chye Tan
Abstract:
The main purpose of developing a modern and sophisticated financial system is to mobilize and allocate the country’s resources for productive uses and, in the process, contribute to economic growth. Financial liberalization, introduced in Malaysia in 1978, was said to be a step towards this goal. According to McKinnon and Shaw, the deregulation of a country’s financial system will create a more efficient and competitive, market-driven financial sector, with savings being channelled to the most productive users. This paper aims to assess whether financial liberalization resulted in bank credit being allocated to the more productive users for the case of Malaysia by: firstly, using a chi-square test to determine whether there exists a relationship between financial liberalization and bank lending in Malaysia; secondly, analyzing on a comparative basis the share of loans secured by 9 major economic sectors, using data on bank loans from 1975 to 2003; and lastly, using present value analysis and rank correlation to determine whether the recipients of bigger loans are the more efficient users. The chi-square test confirmed the generally observed trend of an increase in bank credit with the adoption of financial liberalization. The comparative analysis of loans showed that the bulk of credit was allocated to service sectors, consumer loans, and property-related sectors, at the expense of industry. The results of the rank correlation analysis showed that there is no relationship between the more productive users and the amount of loans obtained. This implies that the recipients (sectors) that received more loans were not the more efficient sectors.
Keywords: allocation of resources, bank credit, financial liberalization, economics
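The rank-correlation step can be illustrated as below with hypothetical sector-level loan shares and a productivity proxy; the figures are not the Malaysian data.

```python
import numpy as np
from scipy import stats

# Hypothetical sector-level figures: share of bank loans received (%) and a productivity proxy
# (e.g. present value of output per unit of credit) for 9 sectors.
loan_share   = np.array([23.0, 18.5, 14.0, 11.0, 9.5, 8.0, 7.0, 5.5, 3.5])
productivity = np.array([0.9, 1.4, 2.1, 1.1, 1.8, 2.5, 1.0, 2.2, 1.6])

rho, p_value = stats.spearmanr(loan_share, productivity)
print(f"Spearman rank correlation = {rho:.2f}, p = {p_value:.3f}")
# A rho close to zero would support the finding that the sectors receiving larger loans
# are not necessarily the more productive ones.
```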
Procedia PDF Downloads 446
1967 Time Parameter Based Detection of Catastrophic Faults in Analog Circuits
Authors: Arabi Abderrazak, Bourouba Nacerdine, Ayad Mouloud, Belaout Abdeslam
Abstract:
In this paper, a new test technique for analog circuits using time-mode simulation is proposed for the detection of single catastrophic faults in analog circuits. This test process is performed to overcome the problem of catastrophic faults escaping detection in the DC-mode test applied to the inverter amplifier in previous research works. The circuit under test is a second-order low pass filter constructed around this type of amplifier but performing a function that differs from that of the previous test. The test approach performed in this work is based on two key elements. The first one concerns the single square pulse signal selected as the input test vector to stimulate the fault effect at the circuit output response. The second element is the conversion of the filter response into a sequence of square pulses obtained from an analog comparator. This signal conversion is achieved through a fixed reference threshold voltage of the comparison circuit. The measurement of the durations of the first three response pulses is regarded as the fault effect detection parameter on the one hand, and as a fault signature helping to fully establish an analog circuit fault diagnosis on the other hand. The results obtained so far are very promising, since the approach has lifted the fault coverage ratio in both modes to over 90% and has revealed the harmful side of faults that had been masked in the DC-mode test.
Keywords: analog circuits, analog faults diagnosis, catastrophic faults, fault detection
Procedia PDF Downloads 441
1966 Computational Analysis of Cavity Effect over Aircraft Wing
Authors: P. Booma Devi, Dilip A. Shah
Abstract:
This paper explores the potential of studying the aerodynamic characteristics of inward cavities, called dimples, as an alternative to classical vortex generators. Increasing the stalling angle is a great challenge in wing design, but our examination is primarily focused on increasing lift. In this paper, the enhancement of lift is mainly achieved by the introduction of a dimple or cavity in a wing. In general, aircraft performance can be enhanced by increasing the aerodynamic efficiency, that is, the lift-to-drag ratio of an aircraft wing. Efficiency improvement can be achieved by improving the maximum lift coefficient or by reducing the drag coefficient. At the time of landing, a high angle of attack may lead to stalling of the aircraft. To avoid this kind of situation, an increase in the stalling angle is warranted. Hence, an improved stalling characteristic is the best way to ease landing complexity. Computational analysis is done for a wing segment based on the NACA 0012 airfoil. Simulation is carried out for a 30 m/s free stream velocity over the plain airfoil and different types of cavities. The wing is modeled in CATIA V5R20 and the analyses are carried out using ANSYS CFX. Triangle and square shapes are used as cavities for the analysis. Simulations revealed that a cavity placed on the wing segment shows an increase in the maximum lift coefficient when compared to the normal wing configuration. Flow separation downstream of the wing is delayed by the presence of cavities up to a particular angle of attack.
Keywords: lift, drag reduce, square dimple, triangle dimple, enhancement of stall angle
Procedia PDF Downloads 347
1965 Sharp Estimates of Oscillatory Singular Integrals with Rough Kernels
Authors: H. Al-Qassem, L. Cheng, Y. Pan
Abstract:
In this paper, we establish sharp bounds for oscillatory singular integrals with an arbitrary real polynomial phase P. Our kernels are allowed to be rough both on the unit sphere and in the radial direction. We show that the bounds grow no faster than log(deg(P)), which is optimal and was first obtained by Parissis and Papadimitrakis for kernels without any radial roughness. Our results substantially improve many previously known results. Among the key ingredients of our methods are a sharp L¹→L² estimate and extrapolation.
Keywords: oscillatory singular integral, rough kernel, singular integral, Orlicz spaces, block spaces, extrapolation, L^{p} boundedness
Procedia PDF Downloads 456
1964 On Optimum Stratification
Authors: M. G. M. Khan, V. D. Prasad, D. K. Rao
Abstract:
In this manuscript, we discuss the problem of determining the optimum stratification of a study (or main) variable based on an auxiliary variable that follows a uniform distribution. If the stratification of the survey variable is made using the auxiliary variable, it may lead to substantial gains in the precision of the estimates. This problem is formulated as a Nonlinear Programming Problem (NLPP), which turns out to be a multistage decision problem and is solved using a dynamic programming technique.
Keywords: auxiliary variable, dynamic programming technique, nonlinear programming problem, optimum stratification, uniform distribution
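As a rough illustration of the boundary-selection problem, the sketch below replaces the dynamic-programming recursion with a brute-force search over candidate boundaries that minimises a Neyman-type objective for a uniformly distributed auxiliary variable; it is an assumption-laden stand-in, not the paper's formulation.

```python
import numpy as np
from itertools import combinations

def best_strata(x, n_strata):
    """Exhaustive search (a stand-in for the dynamic-programming recursion) for stratum
    boundaries of x that minimise sum_h W_h * S_h (a Neyman-type objective)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    best_cuts, best_obj = None, np.inf
    for cuts in combinations(range(2, n - 1), n_strata - 1):
        idx = (0,) + cuts + (n,)
        if any(b - a < 2 for a, b in zip(idx[:-1], idx[1:])):
            continue                                   # keep at least 2 units per stratum
        obj = sum(((b - a) / n) * x[a:b].std(ddof=1) for a, b in zip(idx[:-1], idx[1:]))
        if obj < best_obj:
            best_cuts, best_obj = [x[c] for c in cuts], obj
    return best_cuts, best_obj

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 60)          # auxiliary variable assumed ~ Uniform(0, 100)
cuts, obj = best_strata(x, n_strata=3)
print("boundary points:", np.round(cuts, 1), "objective:", round(obj, 2))
```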
Procedia PDF Downloads 331
1963 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image
Authors: Salah Abdul Hameed Saleh, Ghada Hasan
Abstract:
The aim of this work is to produce an empirical model for the determination of particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model. The reflectance model is a function of the optical properties of the atmosphere, which can be related to its particulate concentrations. The PM10 concentration measurements were collected using a Particle Mass Profiler and Counter in a Single Handheld Unit (Aerocet 531) meter simultaneously with the Landsat 8 OLI satellite image acquisition. The PM10 measurement locations were defined by a handheld global positioning system (GPS). The reflectance values obtained for the visible bands (coastal aerosol, blue, green, and red) of the Landsat 8 OLI image were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was investigated based on the correlation coefficient (R) and root-mean-square error (RMSE) compared with the PM10 ground measurement data. The choice of our proposed multispectral model was founded on the highest correlation coefficient (R) and the lowest root mean square error (RMSE) with the PM10 ground data. The outcomes of this research showed that the visible bands of Landsat 8 OLI were capable of estimating PM10 concentration with an acceptable level of accuracy.
Keywords: air pollution, PM10 concentration, Landsat 8 OLI image, reflectance, multispectral algorithms, Kirkuk area
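A minimal sketch of the band-reflectance regression described above, using synthetic reflectance-PM10 pairs in place of the Kirkuk measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Hypothetical co-located samples: reflectance in four visible OLI bands and ground PM10 (µg/m3).
rng = np.random.default_rng(2)
reflectance = rng.uniform(0.05, 0.30, size=(40, 4))
pm10 = 250 * reflectance[:, 1] + 150 * reflectance[:, 0] + rng.normal(0, 5, 40)

model = LinearRegression().fit(reflectance, pm10)   # multispectral empirical model
pred  = model.predict(reflectance)

r    = np.corrcoef(pm10, pred)[0, 1]                # correlation coefficient R
rmse = np.sqrt(mean_squared_error(pm10, pred))      # root-mean-square error
print(f"R = {r:.2f}, RMSE = {rmse:.1f} µg/m3")
print("band coefficients:", np.round(model.coef_, 1))
```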
Procedia PDF Downloads 442
1962 Downscaling Daily Temperature with Neuroevolutionary Algorithm
Authors: Min Shi
Abstract:
State-of-the-art research with Artificial Neural Networks (ANNs) for the downscaling of General Circulation Models (GCMs) mainly uses the back-propagation algorithm as the training approach. This paper introduces another training approach for ANNs, the evolutionary algorithm. The combined algorithm is named the neuroevolutionary (NE) algorithm. We investigate and evaluate the use of NE algorithms in statistical downscaling by generating temperature estimates at interior points given information from a lattice of surrounding locations. The results of our experiments indicate that NE algorithms can be efficient alternative downscaling methods for daily temperatures.
Keywords: temperature, downscaling, artificial neural networks, evolutionary algorithms
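A toy neuroevolutionary sketch, assuming a synthetic downscaling task: the weights of a small network are evolved by selection and Gaussian mutation instead of back-propagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict temperature at an interior point from 4 surrounding grid points.
X = rng.normal(15, 5, size=(200, 4))
y = X @ np.array([0.3, 0.3, 0.2, 0.2]) + rng.normal(0, 0.3, 200)

n_in, n_hid = 4, 6
n_w = n_in * n_hid + n_hid + n_hid + 1          # all weights and biases, flattened

def predict(w, X):
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = w[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = w[n_in * n_hid + n_hid:-1]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)    # negative MSE (higher is better)

# Simple evolutionary loop in place of back-propagation.
pop = rng.normal(0, 0.5, size=(40, n_w))
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]                                  # keep the 10 fittest
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.05, (30, n_w))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("final RMSE:", round(np.sqrt(-fitness(best)), 3))
```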
Procedia PDF Downloads 349
1961 Parametric Study on Dynamic Analysis of Composite Laminated Plate
Authors: Junaid Kameran Ahmed
Abstract:
A laminated composite plate of graphite/epoxy has been analyzed dynamically in the present work by using a quadratic element (8-node iso-parametric), based on first-order shear deformation theory; every node in this element has 6 degrees of freedom (displacements along the x, y, and z axes and rotations about the x, y, and z axes). The dynamic analysis in the present work covered parametric studies on a composite laminated plate (square plate) to determine their effect on the natural frequency of the plate. The parametric study is represented by a set of changes (plate thickness, number of layers, support conditions, layer orientation), and the plates have been simulated by using the ANSYS package 12. The boundary conditions considered in this study, at all four edges of the plate, are simply supported and fixed. The results obtained from the ANSYS program show that the natural frequency for both fixed and simply supported plates increases with an increasing number of layers, but this increase in the natural frequency for the first five modes becomes negligible after 10 layers. It is observed that the natural frequency of a composite laminated plate will change with the change of ply orientation; the natural frequency increases and is at a maximum with a ply angle of 45° for the simply supported laminated plate, and the maximum natural frequency occurs with a cross-ply (0/90) layup for the fixed laminated composite plate. It is also observed that the natural frequency increase is approximately doubled when the thickness is doubled.
Keywords: laminated plate, orthotropic plate, square plate, natural frequency (free vibration), composite (graphite / epoxy)
Procedia PDF Downloads 348
1960 A Sensor Placement Methodology for Chemical Plants
Authors: Omid Ataei Nia, Karim Salahshoor
Abstract:
In this paper, a new precise and reliable sensor network methodology is introduced for unit processes and operations using the Constriction Coefficient Particle Swarm Optimization (CPSO) method. CPSO is introduced as a new search engine for optimal sensor network design purposes. Furthermore, a Square Root Unscented Kalman Filter (SRUKF) algorithm is employed as a new data reconciliation technique to enhance the stability and accuracy of the filter. The proposed design procedure incorporates precision, cost, observability, reliability together with importance-of-variables (IVs) as a novel measure in Instrumentation Criteria (IC). To the best of our knowledge, no comprehensive approach has yet been proposed in the literature to take into account the importance of variables in the sensor network design procedure. In this paper, specific weight is assigned to each sensor, measuring a process variable in the sensor network to indicate the importance of that variable over the others to cater to the ultimate sensor network application requirements. A set of distinct scenarios has been conducted to evaluate the performance of the proposed methodology in a simulated Continuous Stirred Tank Reactor (CSTR) as a highly nonlinear process plant benchmark. The obtained results reveal the efficacy of the proposed method, leading to significant improvement in accuracy with respect to other alternative sensor network design approaches and securing the definite allocation of sensors to the most important process variables in sensor network design as a novel achievement.Keywords: constriction coefficient PSO, importance of variable, MRMSE, reliability, sensor network design, square root unscented Kalman filter
Procedia PDF Downloads 160
1959 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer
Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo
Abstract:
Crystal size distribution is of great importance in the sugar factories. It determines the market value of granulated sugar and also influences the cost of production of sugar crystals. Typically, sugar is produced using fed-batch vacuum evaporative crystallizer. The crystallization quality is examined by crystal size distribution at the end of the process which is quantified by two parameters: the average crystal size of the distribution in the mean aperture (MA) and the width of the distribution of the coefficient of variation (CV). Lack of real-time measurement of the sugar crystal size hinders its feedback control and eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for sugar crystallization process are not suitable as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating the sugar crystal size as a function of input variables which are easy to measure online. This has the potential to provide real-time estimates of crystal size for its effective feedback control. Using 7 input variables namely: initial crystal size (Lo), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial super-saturation (S0) and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models, and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model. It seems the initial crystal size (Lₒ) does not play a significant role. The goodness of the resulting regression model was evaluated. The coefficient of determination, R² was obtained as 0.994, and the maximum absolute relative error (MARE) was obtained as 4.6%. The high R² (~1.0) and the reasonably low MARE values are an indication that the model is able to predict sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of sugar crystal size during sugar crystallization process in a fed-batch vacuum evaporative crystallizer.Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer
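A hedged sketch of fitting and scoring such a regression model on a coded 2-level factorial dataset; the 128 runs below are simulated, not the reported crystallization data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Illustrative stand-in for the factorial dataset: 6 easy-to-measure inputs
# (T, P, feed flow, steam flow, initial supersaturation, time) and the mean crystal size (MA).
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(128, 6))                  # coded factor levels
ma = 0.55 + 0.08 * X[:, 0] + 0.05 * X[:, 4] + 0.10 * X[:, 5] + rng.normal(0, 0.01, 128)

model = LinearRegression().fit(X, ma)
pred = model.predict(X)

r2   = r2_score(ma, pred)                              # coefficient of determination
mare = np.max(np.abs((pred - ma) / ma)) * 100          # maximum absolute relative error, %
print(f"R² = {r2:.3f}, MARE = {mare:.1f} %")
```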
Procedia PDF Downloads 208
1958 Prevalence of Listeria and Salmonella Contamination in FDA Recalled Foods
Authors: Oluwatofunmi Musa-Ajakaiye, Paul Olorunfemi M.D MPH, John Obafaiye
Abstract:
Introduction: The U.S. Food and Drug Administration (FDA) publishes public notices for recalled FDA-regulated products over time. This study reviewed the primary reasons for recalls of products of various types over a period of 7 years. Methods: The study analyzed data provided in the FDA’s archived recalls for the years 2010-2017. It identified the various reasons for product recalls in the categories of foods, beverages, drugs, medical devices, animal and veterinary products, and dietary supplements. Using SPSS version 29, descriptive statistics and chi-square analysis of the data were performed. Results: Over the period of analysis, a total of 931 recalls were reported. The most frequent reason for recalls was undeclared products (36.7%). The analysis showed that the most recalled product type in the data set was foods and beverages, representing 591 of all recalled products (63.5%). In addition, it was observed that foods and beverages represent 77.2% of products recalled due to the presence of microorganisms. Also, a sub-group analysis of the recall reasons for foods and beverages found that the most prevalent reason for such recalls was undeclared products (50.1%), followed by Listeria (17.3%) and then Salmonella (13.2%). Conclusion: This analysis shows that foods and beverages account for the greatest percentage of total recalls due to undeclared products, Listeria contamination, and Salmonella contamination. The prevalence of Salmonella and Listeria contamination suggests that there is a high risk of microbial contamination in FDA-regulated products, and further studies on the effects of such contamination must be conducted to ensure consumer safety.
Keywords: food, beverages, listeria, salmonella, FDA, contamination, microbial
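The chi-square analysis mentioned in the methods can be sketched as below on a hypothetical product-type by recall-reason contingency table (the counts are illustrative, not the FDA archive figures).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table (product type x recall reason); counts are illustrative only.
#                 undeclared  Listeria  Salmonella  other
table = np.array([[296,        102,       78,        115],   # foods & beverages
                  [ 46,          3,        5,         90],   # dietary supplements
                  [  5,          1,        2,        188]])  # drugs & devices

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
# A small p-value indicates that the distribution of recall reasons differs across product types.
```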
Procedia PDF Downloads 63
1957 The Generalized Pareto Distribution as a Model for Sequential Order Statistics
Authors: Mahdy Esmailian, Mahdi Doostparast, Ahmad Parsian
Abstract:
In this article, type-II censored sequential order statistics (SOS) samples coming from the generalized Pareto distribution are considered. Maximum likelihood (ML) estimators of the unknown parameters are derived on the basis of the available multiple SOS data. Necessary conditions for the existence and uniqueness of the derived ML estimates are given. Due to the complexity of the proposed likelihood function, a useful re-parametrization is suggested. For illustrative purposes, a Monte Carlo simulation study is conducted and an illustrative example is analysed.
Keywords: Bayesian estimation, generalized Pareto distribution, maximum likelihood estimation, sequential order statistics
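A simplified sketch of maximum likelihood fitting for the generalized Pareto distribution; it uses an ordinary (non-SOS) likelihood on simulated data, so it only illustrates the numerical optimisation step, not the full multiple-SOS likelihood derived in the paper.

```python
import numpy as np
from scipy.stats import genpareto
from scipy.optimize import minimize

# Simulated generalized Pareto sample standing in for observed data.
rng = np.random.default_rng(0)
data = genpareto.rvs(c=0.3, scale=2.0, size=200, random_state=rng)

def neg_log_lik(theta):
    shape, scale = theta
    if scale <= 0:
        return np.inf
    return -np.sum(genpareto.logpdf(data, c=shape, scale=scale))

res = minimize(neg_log_lik, x0=[0.1, 1.0], method="Nelder-Mead")
print("ML estimates (shape, scale):", np.round(res.x, 3))
```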
Procedia PDF Downloads 509