Search results for: linear acceleration method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21196

18886 Chikungunya Virus Detection Utilizing an Origami Based Electrochemical Paper Analytical Device

Authors: Pradakshina Sharma, Jagriti Narang

Abstract:

Due to their critical significance in the early identification of infectious diseases, electrochemical sensors have garnered considerable interest. Here, we develop a detection platform for the chikungunya virus (CHIKV) by rationally exploiting the extremely high charge-transfer efficiency of a ternary nanocomposite of graphene oxide, silver, and gold (G/Ag/Au). Because paper is an inexpensive substrate that can be produced in large quantities, the use of an origami electrochemical paper analytical device (ePAD) further enhances the sensor's appeal. Paper-based testing provides a cost-effective platform for point-of-care diagnostics. Sensors of this type are referred to as eco-designed analytical tools due to their efficient production, eco-friendly substrate, and potential to reduce waste management after measurement by incinerating the sensor. In this research, the foldability of paper has been used to develop 3D multifaceted biosensors that specifically detect CHIKV. X-ray diffraction, scanning electron microscopy, UV-vis spectroscopy, and transmission electron microscopy (TEM) were used to characterize the produced nanoparticles. Aptamers are used in this work since they are considered a unique and sensitive tool for rapid diagnostic methods. Cyclic voltammetry (CV) and linear sweep voltammetry (LSV), both performed with a potentiostat, were used to measure the analytical response of the biosensor. The target CHIKV antigen was hybridized with the aptamer-modified electrode as a signal modulation platform, and its presence was determined by a decline in the current produced by its interaction with the redox mediator methylene blue (MB). A detection limit of 1 ng/mL and a broad linear range of 1 ng/mL-10 µg/mL for the CHIKV antigen were reported.

Keywords: biosensors, ePAD, arboviral infections, point of care

Procedia PDF Downloads 88
18885 Colour Formation and Maillard Reactions in Spray-Dried Milk Powders

Authors: Zelin Zhou, Timothy Langrish

Abstract:

Spray drying is the final stage of milk powder production. Traditionally, the quality of spray-dried milk powders has mainly been assessed using their physical properties, such as moisture content, while chemical changes occurring during the spray-drying process have often been ignored. With growing concerns about food quality, it is necessary to establish a better understanding of the heat-induced degradation of skim milk caused by the spray-drying process. In this study, the extent of thermal degradation of skim milk in a pilot-scale spray dryer has been investigated using different inlet gas temperatures. The extent of heat-induced damage has been measured by the formation of advanced Maillard reaction products and the loss of proteins soluble at pH 4.6, as assessed by a fluorometric method. A significant increase in the extent of thermal degradation was found when the inlet gas temperature increased from 170°C to 190°C, suggesting that protein unfolding may play an important role in the kinetics of heat-induced degradation of milk in spray dryers. Colour changes of the spray-dried skim milk powders were also analysed using a standard lighting box. Colourimetric results were expressed in CIELAB colour space, using the colour-difference index (E) to measure the difference between colours and the Chroma (C) to measure their intensity. A strong linear correlation was observed between the colour intensity of the spray-dried skim milk powders and the formation of advanced Maillard reaction products.

Keywords: colour formation, Maillard reactions, spray drying, skim milk powder

Procedia PDF Downloads 176
18884 Analyzing Speed Disparity in Mixed Vehicle Technologies on Horizontal Curves

Authors: Tahmina Sultana, Yasser Hassan

Abstract:

Vehicle technologies are evolving rapidly because of their multifaceted advantages. Deploying different vehicle technologies, such as connectivity and automation, on the same roads as conventional vehicles controlled by human drivers may increase speed disparity in mixed vehicle technologies. Identifying relationships between the speed distribution measures of different vehicles and road geometry can serve as an indicator of speed disparity in mixed technologies. Previous studies have shown that speed disparity measures and traffic accidents are inextricably related. Horizontal curves from three geographic areas were selected based on relevant criteria, and speed data were collected at the midpoint of the preceding tangent and at the starting, middle, and ending points of the curve. Multiple linear mixed-effect models (LME) were developed using instantaneous speed measures representing the speed of vehicles at different points of horizontal curves to identify relationships between speed variance (standard deviation) and road geometry. A simulation-based (Monte Carlo) framework was introduced to examine speed disparity on horizontal curves in mixed vehicle technologies, accounting for the interactions among connected vehicles (CVs), autonomous vehicles (AVs), and non-connected vehicles (NCVs). The Monte Carlo method was used to randomly sample values for the various parameters from their respective distributions. The results show that NCVs had higher speed variation than CVs and AVs. In addition, AVs and CVs helped reduce speed disparity in mixed vehicle technologies at all penetration rates.
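The Monte Carlo sampling step described above can be sketched as follows; the per-technology mean speeds and standard deviations, and the penetration rates, are illustrative assumptions, not the distributions estimated in the study.

```python
import random
import statistics

# Hypothetical mean speed (km/h) and standard deviation per technology;
# NCVs are assumed to have the widest speed spread.
SPEED_PARAMS = {"NCV": (85.0, 8.0), "CV": (85.0, 4.0), "AV": (85.0, 2.0)}

def simulate_speed_variance(av_share, cv_share, n_vehicles=10_000, seed=42):
    """Monte Carlo sample of vehicle speeds at a curve point for a given
    AV/CV penetration (the remainder are NCVs); returns the speed SD."""
    rng = random.Random(seed)
    speeds = []
    for _ in range(n_vehicles):
        u = rng.random()
        if u < av_share:
            mu, sigma = SPEED_PARAMS["AV"]
        elif u < av_share + cv_share:
            mu, sigma = SPEED_PARAMS["CV"]
        else:
            mu, sigma = SPEED_PARAMS["NCV"]
        speeds.append(rng.gauss(mu, sigma))
    return statistics.stdev(speeds)

# Higher AV/CV penetration should reduce the overall speed disparity.
sd_low = simulate_speed_variance(av_share=0.1, cv_share=0.1)
sd_high = simulate_speed_variance(av_share=0.4, cv_share=0.4)
```

Under these assumed distributions, raising AV and CV penetration lowers the mixed-traffic speed standard deviation, mirroring the abstract's qualitative finding.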

Keywords: autonomous vehicles, connected vehicles, non-connected vehicles, speed variance

Procedia PDF Downloads 142
18883 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks

Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer

Abstract:

New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), which is unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a period of time; once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging (HSI) to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) models showed limited efficacy in interpreting the chemical footprint, due to large non-linear relationships between predictor and predictand in a large sample set, likely caused by honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) compared to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey versus multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
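The three figures of merit quoted above (R², RMSE, RPD) can be computed from predicted and reference values as in this minimal sketch; RPD is taken here as the standard deviation of the reference values divided by the RMSE, one common definition.

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute R-squared, RMSE, and RPD (SD of reference values / RMSE)
    for a set of reference values y_true and predictions y_pred."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    rmse = math.sqrt(ss_res / n)
    sd = math.sqrt(ss_tot / (n - 1))  # sample SD of the reference values
    return {"R2": 1 - ss_res / ss_tot, "RMSE": rmse, "RPD": sd / rmse}
```

An RPD above about 2 is conventionally read as a model useful for quantitative prediction, which is why the 1D-CNN's RPD of 2.56 stands out against the linear models.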

Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics

Procedia PDF Downloads 131
18882 The Impact of Misogyny on Women's Leadership in the Local Sphere of Government: The Case of Dr. Kenneth Kaunda District Municipality

Authors: Josephine Eghonghon Ahiante, Barry Hanyane

Abstract:

To give effect to the constitutional right to gender equality, the South African government instituted various legislative frameworks and policies to equalise the public service. Nonetheless, gender inequality in senior management positions remains entrenched in government institutions, particularly in the local sphere of government. The methodology for gathering and analysing data for this study was based on both primary and secondary sources, namely a literature review, qualitative and quantitative data collection and analysis, triangulation, and inductive and deductive thematic analysis. The study found that misogynistic tendencies manifest in organisational culture undermine the government's good intentions of ensuring social justice, leadership diversity, and women's equality. It also demonstrates that traditional gender-role expectations still inform the grounds on which senior management positions are allocated: men perceive women as unfit for leadership and discriminate against them during recruitment, selection, and promotion into high positions. The analyses show that, while government legislation and frameworks have been instrumental in accelerating women's leadership, much more must be done to deconstruct internalised stereotypes about women's gender roles and leadership requirements. The study recommends gender-bias training interventions to educate public employees in management excellence.

Keywords: gender, leadership, misogyny, organisational culture, patriarchy

Procedia PDF Downloads 143
18881 Factors That Influence Choice of Walking Mode in Work Trips: Case Study of Rasht, Iran

Authors: Nima Safaei, Arezoo Masoud, Babak Safaei

Abstract:

In recent years, there has been a growing emphasis on the role of urban planning in walkability and on the effects of individual and socioeconomic factors on the physical activity levels of city dwellers. Although a considerable number of studies have been conducted on walkability and on identifying the factors affecting walking mode choice in developed countries, to the best of our knowledge the literature lacks studies of the factors affecting the choice of walking in developing countries. Owing to the importance of the health of human societies, and in order to create insights and incentives for reducing traffic during rush hours, many researchers and policy makers in transportation planning have devoted attention to walkability studies and have tried to improve the factors that encourage the choice of walking in city neighborhoods. In this study, factors that have proven to have a significant impact on the choice of walking are studied simultaneously for work trips. Data were collected from employees at their workplaces by trained surveyors using questionnaires; the statistical population consists of 117 employed people who commuted daily between work and home in Rasht, Iran, at the beginning of spring 2015. Results obtained through linear regression modeling show that people who do not have freedom of choice in their living locations and must be present at their workplaces at certain hours walk less. Additionally, unlike some previous studies conducted in developed countries, the effects of Body Mass Index (BMI) and employee income level on the walking level in work trips were not significant.

Keywords: BMI, linear regression, transportation, walking, work trips

Procedia PDF Downloads 191
18880 Modelling and Control of Binary Distillation Column

Authors: Narava Manose

Abstract:

Distillation is a very old separation technology for separating liquid mixtures that can be traced back to the chemists of Alexandria in the first century A.D. Today, distillation is the most important industrial separation technology. By the eleventh century, distillation was being used in Italy to produce alcoholic beverages; at that time, it was probably a batch process based on a single stage, the boiler. The word distillation is derived from the Latin destillare, which means dripping or trickling down. By at least the sixteenth century, it was known that the extent of separation could be improved by providing multiple vapor-liquid contacts (stages) in a so-called rectificatorium; the term rectification is derived from the Latin recte facere, meaning to improve. Modern distillation derives its ability to produce almost pure products from the use of multi-stage contacting, and throughout the twentieth century, multistage distillation was by far the most widely used industrial method for separating liquid mixtures of chemical components. The basic principle behind this technique relies on the different boiling temperatures of the various components of the mixture, allowing separation of the vapor of the most volatile component from the liquid of the other component(s). In this work, we developed a simple non-linear model of a binary distillation column using Skogestad's equations in Simulink and computed the steady-state operating point around which to base our analysis and controller design. The model contains two integrators because the condenser and reboiler levels are not controlled. One way of stabilizing the column is the LV-configuration, in which D is used to control M_D and B to control M_B; such a model is given in cola_lv.m, where we have used two P-controllers with gains equal to 10.
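The LV-configuration level control described above can be sketched as two proportional controllers with gain 10, as stated in the abstract; the setpoints, holdups, and nominal flows below are placeholder values, not those from cola_lv.m.

```python
def p_controller(setpoint, gain=10.0):
    """Proportional level controller: returns a closure that adjusts a
    product flow in proportion to the holdup's deviation from setpoint."""
    def control(measured, nominal_flow):
        return nominal_flow + gain * (measured - setpoint)
    return control

# Hypothetical steady-state holdup setpoints and nominal product flows.
distillate_ctrl = p_controller(setpoint=0.5)  # D stabilizes condenser holdup M_D
bottoms_ctrl = p_controller(setpoint=0.5)     # B stabilizes reboiler holdup M_B

# Condenser level high -> draw more distillate; reboiler level low -> draw less bottoms.
D = distillate_ctrl(measured=0.55, nominal_flow=0.5)
B = bottoms_ctrl(measured=0.45, nominal_flow=0.5)
```

This mirrors the LV idea: the two level loops remove the model's two integrators, leaving reflux L and boilup V free for composition control.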

Keywords: modelling, distillation column, control, binary distillation

Procedia PDF Downloads 270
18879 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands

Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé

Abstract:

The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process, since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been proved in the field. Numerical flow modeling in a vertical, variably saturated CW is carried out here by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, together with the van Genuchten-Mualem parametrization. For validation purposes, MHFEM results were compared to those of HYDRUS (a software package based on a finite element discretization). As the van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation is the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads during saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention should also be paid to boundary condition modeling (surface ponding or evaporation) in order to tackle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed; as these are usually not available, notably due to the randomness of storm events, we propose a simple, robust, and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied. To that end, the technique is implemented by applying automatic differentiation (AD) to augment computer codes with derivative computations; very little effort is needed to obtain the differentiated code using the on-line Tapenade AD engine. Field data were collected over several months for a three-layered CW located in Strasbourg (Alsace, France) at the edge of the urban water stream Ostwaldergraben. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
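The van Genuchten-Mualem parametrization referred to above can be sketched as follows; the parameter values in the example are textbook sand-like values, not those identified for the Ostwaldergraben filter.

```python
import math

def vg_water_content(h, theta_r, theta_s, alpha, n):
    """van Genuchten retention curve: volumetric water content theta as a
    function of pressure head h (h < 0 in the unsaturated zone)."""
    if h >= 0:
        return theta_s
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)  # effective saturation
    return theta_r + (theta_s - theta_r) * se

def mualem_conductivity(h, theta_r, theta_s, alpha, n, k_s):
    """Mualem model for unsaturated hydraulic conductivity K(h)."""
    if h >= 0:
        return k_s
    m = 1.0 - 1.0 / n
    theta = vg_water_content(h, theta_r, theta_s, alpha, n)
    se = (theta - theta_r) / (theta_s - theta_r)
    return k_s * math.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# Illustrative sand-like parameters (theta in -, alpha in 1/cm, K_s in cm/day).
params = dict(theta_r=0.045, theta_s=0.43, alpha=0.145, n=2.68)
k = mualem_conductivity(-100.0, k_s=29.7, **params)
```

The strong non-linearity of these curves in α and n is what makes the inverse problem sensitive and motivates the derivative information supplied by automatic differentiation.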

Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis

Procedia PDF Downloads 156
18878 The Effect of Conservative Tillage on Physical Properties of Soil and Yield of Rainfed Wheat

Authors: Abolfazl Hedayatipoor, Mohammad Younesi Alamooti

Abstract:

In order to study the effect of conservative tillage on a number of physical properties of soil and on the yield of rainfed wheat, an experiment in the form of a randomized complete block design (RCBD) with three replications was conducted in a field in Aliabad County, Iran. The study treatments were: T1) conventional method, T2) combined moldboard plow method, T3) chisel-packer method, and T4) direct planting method. In early October, the soil was prepared according to these treatments in a field that had been used for rainfed wheat farming the previous year. The apparent specific gravity of the soil, the weighted mean diameter (WMD) of soil aggregates, soil mechanical resistance, and soil permeability were measured. Data were analyzed in MSTAT-C. Results showed that the tillage practice had no significant effect on grain yield at the 5% significance level. Soil permeability was 10.9, 16.3, 15.7, and 17.9 mm/h for T1, T2, T3, and T4, respectively.

Keywords: rainfed agriculture, conservative tillage, energy consumption, wheat

Procedia PDF Downloads 202
18877 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate

Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim

Abstract:

Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of connective tissue disorders. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks over a long analysis time. Peak asymmetry may cause incorrect calculation of the sample concentration, and the analysis time is unacceptable, especially for the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy, and efficient method for quantification of HCQ sulfate by High Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. The method was optimized for peak symmetry using the surface area graphic as the Design of Experiments (DoE) and the tailing factor (TF) as the indicator for the Design Space (DS). The reference method was that described in USP 37 for quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on QbD concepts. The DS was created with the TF in a range between 0.98 and 1.2 in order to establish the ideal analytical conditions. Changes were made in the composition of the USP mobile phase (USP-MP): USP-MP : methanol (90:10 v/v, 80:20 v/v, and 70:30 v/v), in the flow rate (0.8, 1.0, and 1.2 mL/min), and in the oven temperature (30, 35, and 40 ºC). The USP method required a long run time (40-50 minutes) and uses a high flow rate (1.5 mL.min-1), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable only if the drug were not a racemic mixture, since co-elution of the isomers can make peak integration unreliable.
Therefore, optimization was pursued to reduce the analysis time and to achieve better peak resolution and TF. From the surface-response plot, the ideal analytical condition was confirmed: 45 ºC, 0.8 mL.min-1, and 80:20 USP-MP : methanol. The optimized HPLC method enabled quantification of HCQ sulfate with a high-resolution peak and a TF of 1.17. This ensures well-behaved co-elution of the HCQ isomers and accurate quantification of the raw material as a racemic mixture. The optimized method also proved to be approximately 18 times faster than the reference method, with a lower flow rate, further reducing solvent consumption and, consequently, analysis cost. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved faster and more efficient than the USP method with respect to retention time and, especially, peak resolution. The higher peak resolution supports implementation of the method for quantification of the drug as a racemic mixture, without requiring separation of the isomers.
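The 3³ factorial design described above, with mobile-phase composition, flow rate, and oven temperature each at three levels, can be enumerated as in this minimal sketch (the factor names are labels chosen here for illustration):

```python
from itertools import product

# The three factors and three levels screened in the study: mobile-phase
# composition (USP-MP : methanol, v/v), flow rate (mL/min), oven temperature (degC).
factors = {
    "mobile_phase": ["90:10", "80:20", "70:30"],
    "flow_mL_min": [0.8, 1.0, 1.2],
    "oven_temp_C": [30, 35, 40],
}

# Full 3^3 factorial design: every combination of the three levels (27 runs).
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
```

Each of the 27 runs yields a tailing factor; the design space is then the subset of conditions whose TF falls inside the 0.98-1.2 acceptance window.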

Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic

Procedia PDF Downloads 633
18876 The Effect of Second Victim-Related Distress on Work-Related Outcomes in Tertiary Care, Kelantan, Malaysia

Authors: Ahmad Zulfahmi Mohd Kamaruzaman, Mohd Ismail Ibrahim, Ariffin Marzuki Mokhtar, Maizun Mohd Zain, Saiful Nazri Satiman, Mohd Najib Majdi Yaacob

Abstract:

Background: In the aftermath of patient safety incidents, the healthcare providers involved may sustain second victim-related distress (second victim distress and reduced professional efficacy), with subsequent negative work-related outcomes, or may conversely cultivate resilience. This study aimed to investigate the factors affecting negative work-related outcomes and resilience, with the triad of colleague, supervisor, and institutional support as hypothetical mediators. Methods: This was a cross-sectional study recruiting a total of 733 healthcare providers from three tertiary care hospitals in Kelantan, Malaysia. Three-step hierarchical linear regression models were developed for each outcome: negative work-related outcomes and resilience. Then, four multiple-mediator models of the support triad were analyzed. Results: Second victim distress, professional efficacy, and the support triad contributed significantly to each regression model. In the pathway from professional efficacy to both negative work-related outcomes and resilience, colleague support partially mediated the relationship. For second victim distress on negative work-related outcomes, colleague and supervisor support were partial mediators, and on resilience, all three sources of support produced a similar effect. Conclusion: Second victim distress, professional efficacy, and the support triad influenced negative work-related outcomes and resilience. The support triad, as mediators, ameliorated these effects, underscoring the importance of good support for recovery after encountering patient safety incidents.
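The partial mediation analyses described above rest on the product-of-coefficients idea, sketched here on synthetic data; the variable names and effect sizes are illustrative, not the study's estimates.

```python
import numpy as np

def ols_coefs(X, y):
    """Ordinary least squares with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation: a is the effect of the predictor
    on the mediator, b the effect of the mediator on the outcome
    controlling for the predictor; a*b is the indirect (mediated) effect."""
    a = ols_coefs(x.reshape(-1, 1), m)[1]
    b = ols_coefs(np.column_stack([x, m]), y)[2]
    return a * b

# Synthetic illustration: distress -> colleague support -> outcome.
rng = np.random.default_rng(0)
distress = rng.normal(size=500)
support = 0.6 * distress + rng.normal(scale=0.5, size=500)
outcome = 0.4 * support + 0.2 * distress + rng.normal(scale=0.5, size=500)
ie = indirect_effect(distress, support, outcome)
```

Because the direct effect (0.2 here) remains non-zero alongside the indirect path, this is partial rather than full mediation, the pattern reported for colleague and supervisor support.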

Keywords: second victims, patient safety incidents, hierarchical linear regression, mediation, support

Procedia PDF Downloads 99
18875 Determination of Cohesive Zone Model’s Parameters Based On the Uniaxial Stress-Strain Curve

Authors: Y. J. Wang, C. Q. Ru

Abstract:

A key issue with cohesive zone models (CZMs) is how to determine the model parameters from real material test data. In this paper, the uniaxial nominal stress-strain curve (SS curve) is used to determine two key parameters of a cohesive zone model: the maximum traction and the area under the traction-separation law (TSL) curve. To this end, the true SS curve is obtained from the nominal SS curve, and the relationship between the nominal SS curve and the TSL is derived under the assumption that the stress for cracking should be the same in both the CZM and the real material. In particular, the true SS curve after necking is derived from the nominal SS curve by taking the average of a power-law extrapolation and a linear extrapolation, and a damage factor is introduced to offset the true stress reduction caused by the voids generated in the necking zone. The maximum traction of the TSL is set equal to the maximum true stress, calculated with the damage factor at the end of hardening. In addition, a simple specimen is simulated in Abaqus/Standard to calculate the critical J-integral, and the fracture energy computed from the critical J-integral represents the strain energy stored in the necking zone as calculated from the true SS curve. Finally, the CZM parameters obtained by the present method are compared to those used in previous related work on simulation of the drop-weight tear test.
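The conversion from the nominal to the true SS curve used above follows, before necking, the standard relations assuming uniform deformation and volume constancy; the stress and strain values in the example are illustrative.

```python
import math

def true_stress_strain(nominal_stress, nominal_strain):
    """Convert nominal (engineering) stress and strain to true values,
    valid up to the onset of necking:
        eps_true = ln(1 + eps_nom),  sigma_true = sigma_nom * (1 + eps_nom)."""
    true_strain = math.log(1.0 + nominal_strain)
    true_stress = nominal_stress * (1.0 + nominal_strain)
    return true_stress, true_strain

# Example: 400 MPa nominal stress at 10% nominal strain.
s_true, e_true = true_stress_strain(400.0, 0.10)
```

Beyond necking these relations no longer hold, which is why the paper switches to the averaged power-law/linear extrapolation corrected by a damage factor.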

Keywords: dynamic fracture, cohesive zone model, traction-separation law, stress-strain curve, J-integral

Procedia PDF Downloads 507
18874 Development of In Situ Permeability Test Using Constant Discharge Method for Sandy Soils

Authors: A. Rifa’i, Y. Takeshita, M. Komatsu

Abstract:

The post-rain puddles that occur in the first yard of Prambanan Temple often disturb visitor activity. A porous layer and a drainage system were built previously to avoid this problem, but puddles still appeared after rain. The permeability parameter therefore needs to be determined by a simpler and more exact procedure. An instrument model was proposed as a development of field permeability testing instruments, based on the proposed constant discharge method, in which a tube is supplied with a constant water flow. The procedure was carried out from unsaturated to saturated soil conditions, with the volumetric water content (θ) monitored by a soil moisture measurement device. The result was a relationship between k and θ, fitted by a numerical approach using the van Genuchten model. The optimum value of the parameter θr obtained from the test corresponded to very dry soil. The coefficient of permeability at a density of 19.8 kN/m³ under unsaturated conditions ranged from 3 × 10⁻⁶ cm/s (Sr = 68%) to 9.98 × 10⁻⁴ cm/s (Sr = 82%). The equipment and testing procedure developed in this research proved effective, simple, and easy to implement for determining the field permeability coefficient of sandy soil. Using the constant discharge method in the proposed permeability test, the permeability coefficient under unsaturated conditions can be obtained without establishing the soil-water characteristic curve.

Keywords: constant discharge method, in situ permeability test, sandy soil, unsaturated conditions

Procedia PDF Downloads 375
18873 Numerical Modelling of Dry Stone Masonry Structures Based on Finite-Discrete Element Method

Authors: Ž. Nikolić, H. Smoljanović, N. Živaljić

Abstract:

This paper presents a numerical model based on the finite-discrete element method for analysis of the structural response of dry stone masonry structures under static and dynamic loads. More precisely, each discrete stone block is discretized by finite elements. Material non-linearity, including fracture and fragmentation of the discrete elements, as well as cyclic behavior during dynamic load, is considered through contact elements implemented within the finite element mesh. The model was applied to several examples of such structures. The analysis shows high accuracy of the numerical results in comparison with experimental ones and demonstrates the potential of the finite-discrete element method for modelling the response of dry stone masonry structures.

Keywords: dry stone masonry structures, dynamic load, finite-discrete element method, static load

Procedia PDF Downloads 407
18872 Combining the Fictitious Stress Method and Displacement Discontinuity Method in Solving Crack Problems in Anisotropic Material

Authors: Bahatti̇n Ki̇mençe, Uğur Ki̇mençe

Abstract:

In this study, the influence functions of displacement discontinuity in an anisotropic elastic medium are obtained in order to produce the boundary element equations. A displacement discontinuity method (DDM) formulation is presented with the aim of modeling two-dimensional elastic fracture problems. The formulation is found by analytical integration of the fundamental solution along a straight-line crack. To this end, Kelvin's fundamental solutions for anisotropic media on an infinite plane are used to form dipoles from singular loads, and various combinations of these dipoles are used to obtain the influence functions of displacement discontinuity. The study introduces a technique for coupling the fictitious stress method (FSM) and the DDM, applied to several examples to demonstrate the effectiveness of the proposed coupling. Displacement discontinuity equations are obtained using dipole solutions calculated from known singular force solutions in an anisotropic medium; the DDM built from the solutions of these equations is combined with the FSM and compared on various examples. One or more crack problems with various geometries, in rectangular plates in finite and infinite regions under tensile stress, were examined with the coupled FSM and DDM in the anisotropic setting, and the effectiveness of the coupled method was demonstrated. Since crack problems can be modeled more easily with the DDM, its use has increased recently. In obtaining the displacement discontinuity equations, Papkovitch functions were used, as in Crouch's work, and harmonic functions were chosen to satisfy the various boundary conditions. A comparison is made between two indirect boundary element formulations, the DDM and an extension of the FSM, for solving problems involving cracks.
Several numerical examples are presented, and the outcomes are compared to existing analytical or reference results.

Keywords: displacement discontinuity method, fictitious stress method, crack problems, anisotropic material

Procedia PDF Downloads 72
18871 A Novel Combination Method for Computing the Importance Map of Image

Authors: Ahmad Absetan, Mahdi Nooshyar

Abstract:

The importance map is an image-based measure and a core part of content-aware resizing algorithms. Importance measures include image gradients, saliency, and entropy, as well as high-level cues such as face detectors, motion detectors, and more. In this work we propose a new method to calculate the importance map: it is generated automatically using a novel combination of image edge density and Harel saliency measurement. Experiments on different types of images demonstrate that our method effectively detects prominent areas and can be used in image resizing applications to preserve important areas while maintaining image quality.
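A minimal sketch of such a combination, assuming a precomputed saliency map (e.g. from Harel's GBVS, not implemented here) and using the mean gradient magnitude in a window as the edge density; the equal weighting is an assumption for illustration:

```python
import numpy as np

def edge_density(gray, win=5):
    """Local edge density: mean gradient magnitude over a square window."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    pad = win // 2
    padded = np.pad(mag, pad, mode="edge")
    out = np.zeros_like(mag)
    h, w = mag.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + win, j:j + win].mean()
    return out

def importance_map(gray, saliency, alpha=0.5):
    """Weighted combination of normalized edge density and a saliency map."""
    def norm(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return alpha * norm(edge_density(gray)) + (1 - alpha) * norm(saliency)

# Toy example: a vertical step edge with a flat (uninformative) saliency map.
gray = np.zeros((16, 16))
gray[:, 8:] = 1.0
imp = importance_map(gray, saliency=np.zeros((16, 16)))
```

Pixels near the step edge receive high importance, so a seam-carving or warping resizer would preferentially remove the flat regions.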

Keywords: content-aware image resizing, visual saliency, edge density, image warping

Procedia PDF Downloads 576
18870 Optimization of Palm Oil Plantation Revitalization in North Sumatera

Authors: Juliza Hidayati, Sukardi, Ani Suryani, Sugiharto, Anas M. Fauzi

Abstract:

The idea of making North Sumatera a barometer of the national oil palm industry requires efforts in commodity and agro-industrial development of oil palm. One such effort is the successful execution of plantation revitalization. Plantation revitalization is an effort to accelerate the development of smallholder plantations through expansion and replanting, with the help of a palm estate company as a business partner and a bank-financed plantation revitalization fund. The business partner agreement obliges and binds the partner to bring smallholder plantation productivity at least to the level of the business partner's own, so that repayments to the banks are larger and the people prosper as plantation owners. The generally low productivity of smallholder plantations, below their normal potential, is caused by many old and damaged plants and randomly chosen planting material. The purpose of revitalizing oil palm plantations is to increase their competitiveness through increased farm productivity. This research aims to identify the potential criteria influencing the priorities for plantation productivity improvement that must be observed and followed up, so that the goals of improving competitiveness and making North Sumatera the barometer of national palm oil can be achieved. The research was conducted with the Analytic Network Process (ANP), which uses expert knowledge to capture the dependency relationships between factors or criteria and to produce an objective, relevant opinion depicting the actual situation.

Keywords: palm barometer, acceleration of plantation development, productivity, revitalization

Procedia PDF Downloads 669
18869 Speedup Breadth-First Search by Graph Ordering

Authors: Qiuyi Lyu, Bin Gong

Abstract:

Breadth-first search (BFS) is a core graph algorithm that is widely used for graph analysis. As it is frequently used in many graph applications, improving BFS performance is essential. In this paper, we present a graph ordering method that reorders the graph nodes to achieve better data locality, thus improving BFS performance. Our method is based on the observation that sibling relationships dominate the cache access pattern during BFS traversal. Therefore, we propose a frequency-based model to construct the graph order. First, we optimize the graph order according to the nodes' visit frequency: nodes with high visit frequency are processed with priority. Second, we try to maximize the overlap of child nodes layer by layer. As this problem is proved to be NP-hard, we propose a heuristic method that greatly reduces the preprocessing overhead. We conduct extensive experiments on 16 real-world datasets. The results show that our method achieves performance comparable to the state-of-the-art methods while its graph ordering overhead is only about 1/15 of theirs.
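The reordering idea can be illustrated with a small sketch. Here node degree stands in for the paper's visit-frequency measure, and the relabeling simply gives frequently visited nodes small, contiguous ids; the actual frequency model and the layer-wise child-overlap heuristic are more involved than this.

```python
from collections import deque

def reorder_by_frequency(adj):
    """Relabel nodes so high-degree (frequently visited) nodes get small,
    contiguous ids -- a simple stand-in for a visit-frequency criterion."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    new_id = {v: i for i, v in enumerate(order)}
    # rebuild the adjacency under the new labels, neighbors sorted for locality
    return {new_id[v]: sorted(new_id[u] for u in adj[v]) for v in adj}, new_id

def bfs(adj, src):
    """Standard BFS returning the visit order."""
    seen, order, q = {src}, [], deque([src])
    while q:
        v = q.popleft()
        order.append(v)
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                q.append(u)
    return order
```

Running BFS on the relabeled graph touches small ids early and often, which is what improves cache behavior when the adjacency is stored contiguously.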

Keywords: breadth-first search, BFS, graph ordering, graph algorithm

Procedia PDF Downloads 131
18868 Understanding the Linkages of Human Development and Fertility Change in Districts of Uttar Pradesh

Authors: Mamta Rajbhar, Sanjay K. Mohanty

Abstract:

India's progress in achieving replacement-level fertility is largely contingent on fertility reduction in the state of Uttar Pradesh, which accounts for 17% of India's population with a low level of development. Though the TFR in the state has declined from 5.1 in 1991 to 3.4 by 2011, this conceals large differences in fertility levels across districts. Using data from multiple sources, this paper tests the hypothesis that improvement in human development significantly reduces fertility levels in the districts of Uttar Pradesh. The unit of analysis is the district; fertility estimates are derived using the reverse survival method (RSM), while human development indices (HDI) are estimated using the uniform methodology adopted by UNDP for three periods. Correlation and linear regression models are used to examine the relationship between fertility change and human development indices across districts. Results show large variation and significant change in fertility levels among the districts of Uttar Pradesh. During 1991-2011, eight districts experienced a decline in TFR of 10-20%, 30 districts of 20-30%, and 32 districts of more than 30%. On the human development side, 17 districts recorded an increase of more than 0.170 in HDI, 18 districts in the range of 0.150-0.170, 29 districts between 0.125-0.150, and six districts in the range of 0.100-0.125 during 1991-2011. The study shows a significant negative relationship between HDI and TFR; HDI alone explains 70% of the variation in TFR. Moreover, the regression coefficient of TFR on HDI has changed over time: from -0.524 in 1991 to -0.7477 by 2001 and -0.7181 by 2010. The regression analyses indicate that a 0.1-point increase in HDI value leads to a 0.78-point decline in TFR. Improving the HDI will certainly reduce fertility levels in the districts.
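The district-level relationship can be reproduced in outline with ordinary least squares. The numbers below are made-up illustrative values, not the paper's district data; the point is only the mechanics of regressing TFR on HDI and reading off the slope and R².

```python
import numpy as np

# Hypothetical district-level values, for illustration only
hdi = np.array([0.35, 0.40, 0.45, 0.50, 0.55, 0.60])
tfr = np.array([4.6, 4.2, 3.9, 3.5, 3.0, 2.7])

# Simple OLS fit: tfr = a + b * hdi
b, a = np.polyfit(hdi, tfr, 1)

# Coefficient of determination (share of TFR variation explained by HDI)
pred = a + b * hdi
r2 = 1 - np.sum((tfr - pred) ** 2) / np.sum((tfr - tfr.mean()) ** 2)
```

A negative `b` corresponds to the paper's finding that districts with larger HDI gains show lower fertility.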

Keywords: Fertility, HDI, Uttar Pradesh

Procedia PDF Downloads 243
18867 Oil Extraction from Sunflower Seed Using Green Solvent 2-Methyltetrahydrofuran and Isoamyl Alcohol

Authors: Sergio S. De Jesus, Aline Santana, Rubens Maciel Filho

Abstract:

The objective of this study was to identify a green solvent system with extraction efficiency similar to the traditional Bligh and Dyer method. Sunflower seed oil was extracted by the Bligh and Dyer procedure with 2-methyltetrahydrofuran and isoamyl alcohol at ratios of 1:1; 2:1; 3:1; 1:2; 3:1. In parallel, comparative experiments were performed with chloroform and methanol at ratios of 1:1; 2:1; 3:1; 1:2; 3:1. The comparison study used 5 replicates (n=5). Statistical analysis was performed using Microsoft Office Excel (Microsoft, USA) to determine means, with Tukey's Honestly Significant Difference test for comparison between treatments (α = 0.05). The results showed that the classic method with methanol and chloroform gave extraction oil yields of 31-44% (w/w), versus 36-45% (w/w) for the green-solvent extractions. Among the treatments, 2-methyltetrahydrofuran and isoamyl alcohol at a ratio of 2:1 provided the best results (45% w/w), while the classic method using chloroform and methanol at a ratio of 3:1 presented an extraction oil yield of 44% (w/w). It was concluded that the proposed extraction method using 2-methyltetrahydrofuran and isoamyl alcohol allows the same level of efficiency as chloroform and methanol.
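The Tukey HSD comparison used above can be sketched for two equal-size treatments with n = 5 replicates. The yield values are hypothetical, and the critical value q_crit = 3.26 (α = 0.05, k = 2 groups, df = 8) is taken from a studentized-range table; the study itself compared more treatment combinations.

```python
import numpy as np

# Hypothetical oil-yield replicates (% w/w, n = 5) for two treatments
yields = {
    "MeTHF:isoamyl 2:1": np.array([44.1, 45.3, 44.8, 45.6, 45.2]),
    "CHCl3:MeOH 3:1":    np.array([43.5, 44.2, 44.6, 43.9, 44.3]),
}

def tukey_hsd_two(a, b, q_crit=3.26):
    """Tukey's HSD for two equal-size groups; q_crit is the studentized-range
    critical value (alpha = 0.05, k = 2, df = 8) from a table."""
    n = len(a)
    mse = (a.var(ddof=1) + b.var(ddof=1)) / 2   # pooled within-group variance
    hsd = q_crit * np.sqrt(mse / n)              # minimum significant difference
    return abs(a.mean() - b.mean()), hsd
```

If the observed mean difference exceeds `hsd`, the two treatments differ significantly at the chosen α.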

Keywords: extraction, green solvent, lipids, sugarcane

Procedia PDF Downloads 371
18866 Consequences of Youth Bulge in Pakistan

Authors: Muhammad Farooq, Muhammad Idrees

Abstract:

The present study was designed to explore the causes and effects of the youth bulge in Pakistan. A youth bulge is a population segment that can create problems for the whole society. The youth bulge is a common phenomenon in many developing countries and, in particular, in the least developed countries. It often arises at a stage of development where a country succeeds in reducing infant mortality while mothers still have a high fertility rate. The result is that a large share of the population is comprised of children and young adults, and today's children are tomorrow's young adults. Youth often play a prominent role in political violence, and the existence of a “youth bulge” has been associated with times of political crisis. The population pyramid of Pakistan shows a large youth proportion; the government has not used this youth in a positive way or provided them opportunities for development, and this situation creates frustration among youth that leads them toward conflict, unrest, and violence. This study focuses on the opportunity and motives of the youth bulge situation in Pakistan through the lens of youth bulge theory. Moreover, it offers suggestions to engage youth in development activities and avoid a youth bulge situation in Pakistan. The research was conducted in the metropolitan entities of Punjab, Pakistan. A sample of 300 respondents was taken from three randomly selected metropolitan entities (Faisalabad, Lahore, and Rawalpindi) of Punjab Province. Information regarding demography, household, locality, and other socio-cultural variables related to the causes and effects of the youth bulge was collected through a well-structured interview schedule. Mean, standard deviation, and frequency distribution were used to describe central tendency and dispersion. Multiple linear regression was also applied to measure the influence of various independent variables on the response variable.

Keywords: youth bulge, violence, conflict, social unrest, crime, metropolitan entities, mean, standard deviation, multiple linear regression

Procedia PDF Downloads 452
18865 Deep Learning for SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems often transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to fully polarimetric data is therefore motivated by the prospect of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. 
Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.

Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network

Procedia PDF Downloads 65
18864 A Character Detection Method for Ancient Yi Books Based on Connected Components and Regressive Character Segmentation

Authors: Xu Han, Shanxiong Chen, Shiyu Zhu, Xiaoyu Lin, Fujia Zhao, Dingwang Wang

Abstract:

Character detection is an important issue in the character recognition of ancient Yi books; the accuracy of detection directly affects the recognition results. Considering the complex layout, the lack of standard typesetting, and the mixed arrangement of images and text, we propose a character detection method for ancient Yi books based on connected components and regressive character segmentation. First, the scanned images of ancient Yi books are preprocessed with non-local means filtering, and a modified local adaptive threshold binarization algorithm is then used to obtain binary images that separate foreground from background. Second, non-text areas are removed by a method based on connected components. Finally, the single characters in the ancient Yi books are segmented by our method. The experimental results show that the method effectively separates text areas from non-text areas in ancient Yi books, achieves higher accuracy and recall in the character detection experiments, and effectively solves the problem of character detection and segmentation in the character recognition of ancient books.
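The connected-component filtering step can be sketched in a few lines of labeling plus an area filter. The 4-connectivity and the area threshold used to separate illustrations from character-sized blobs are simplifying assumptions, not the paper's exact criteria.

```python
import numpy as np
from collections import deque

def connected_components(binary):
    """4-connected component labeling on a binary image (1 = foreground)."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(binary)):
        if labels[sy, sx]:
            continue
        current += 1
        labels[sy, sx] = current
        q = deque([(sy, sx)])
        while q:  # flood fill the component
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
    return labels, current

def drop_large_components(binary, max_area):
    """Remove components larger than max_area (a crude non-text filter:
    illustrations tend to form much larger blobs than single characters)."""
    labels, n = connected_components(binary)
    out = binary.copy()
    for k in range(1, n + 1):
        if (labels == k).sum() > max_area:
            out[labels == k] = 0
    return out
```

In practice the threshold would be set from the typical character size of the binarized page.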

Keywords: interest point, salient region detection, image segmentation

Procedia PDF Downloads 124
18863 Performance Based Seismic Retrofit of Masonry Infiled Reinforced Concrete Frames Using Passive Energy Dissipation Devices

Authors: Alok Madan, Arshad K. Hashmi

Abstract:

The paper presents a plastic analysis procedure based on the energy balance concept for performance based seismic retrofit of multi-story multi-bay masonry infilled reinforced concrete (R/C) frames with a ‘soft’ ground story using passive energy dissipation (PED) devices, with the objective of achieving a target performance level of the retrofitted R/C frame for a given seismic hazard level at the building site. The proposed energy based plastic analysis procedure was employed to develop performance based design (PBD) formulations for PED devices for a simulated application in seismic retrofit of existing frame structures designed in compliance with the prevalent standard codes of practice. The PBD formulations developed for PED devices were implemented for simulated seismic retrofit of a representative code-compliant masonry infilled R/C frame with a ‘soft’ ground story using friction dampers as the PED device. Non-linear dynamic analyses of the retrofitted masonry infilled R/C frames are performed to investigate the efficacy and accuracy of the proposed energy based plastic analysis procedure in achieving the target performance level under design level earthquakes. Results of the non-linear dynamic analyses demonstrate that the maximum inter-story drifts in the masonry infilled R/C frames with a ‘soft’ ground story retrofitted with friction dampers designed using the proposed PBD formulations are controlled within the target drifts under near-field as well as far-field earthquakes.

Keywords: energy methods, masonry infilled frame, near-field earthquakes, seismic protection, supplemental damping devices

Procedia PDF Downloads 293
18862 Analytical Investigation and Solution of Vibrational Structures (Arched Beam Bridges) by the New Method “AGM”

Authors: M. R. Akbari, P. Soleimani, R. Khalili, Sara Akbari

Abstract:

Analyzing and modeling the vibrational behavior of arched bridges during an earthquake, in order to reduce the damage inflicted on the structure, is a very hard task. It is done analytically in the present paper for the first time. Given the importance of arched bridges as great structures in human civilization, and their specific characteristics, such as transferring vertical loads to their arcs and the absence of bending moments and shearing forces, this case study is devoted to this special issue. Here, the nonlinear vibration of arched bridges has been modeled and simulated by an arched beam under harmonic vertical loads, and its behavior has been investigated by analyzing the nonlinear partial differential equation governing the system. Notably, the procedure has been carried out analytically by AGM (Akbari-Ganji Method). Furthermore, comparisons have been made between the results obtained by a numerical method (RKF-45) and by AGM in order to assess their scientific validity.

Keywords: new method (AGM), arched beam bridges, angular frequency, harmonic loads

Procedia PDF Downloads 293
18861 An Accelerated Stochastic Gradient Method with Momentum

Authors: Liang Liu, Xiaopeng Luo

Abstract:

In this paper, we propose an accelerated stochastic gradient method with momentum. The momentum term is a weighted average of the generated gradients, with weights that decay inversely proportionally to the iteration count. Stochastic gradient descent with momentum (SGDM) uses weights that decay exponentially with the iteration count to generate the momentum term. Using exponential decay weights, variants of SGDM with opaque and complicated forms have been proposed to achieve better performance. The momentum update rules of our method, by contrast, are as simple as those of SGDM. We provide theoretical convergence analyses, which show that both the exponential decay weights and our inverse-proportional decay weights can limit the variance of the parameter moving direction to a region. Experimental results show that our method works well on many practical problems and outperforms SGDM.
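One way to read the weighting scheme is as a running average in which each past gradient's weight shrinks inversely with the iteration count. The sketch below follows that reading; it is an interpretation for illustration, not the authors' exact update rule, and the test objective is a simple quadratic rather than a stochastic problem.

```python
import numpy as np

def sgd_inverse_momentum(grad, x0, lr=0.1, steps=100):
    """Gradient descent whose momentum term is the running average of past
    gradients: at step t each gradient carries weight 1/t, so individual
    weights decay inversely with the iteration count (contrast with SGDM,
    where weights decay exponentially)."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = ((t - 1) / t) * m + (1 / t) * g  # exact running average of gradients
        x = x - lr * m
    return x
```

On noisy problems the averaging damps gradient variance, which is the property the paper's analysis formalizes.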

Keywords: exponential decay rate weight, gradient descent, inverse proportional decay rate weight, momentum

Procedia PDF Downloads 160
18860 Nonuniformity Correction Technique in Infrared Video Using Feedback Recursive Least Square Algorithm

Authors: Flavio O. Torres, Maria J. Castilla, Rodrigo A. Augsburger, Pedro I. Cachana, Katherine S. Reyes

Abstract:

In this paper, we present a scene-based nonuniformity correction method using a modified recursive least squares algorithm with a feedback system on the updates. The feedback is designed to remove the impulsive noise contamination produced by a recursive least squares algorithm by monitoring the output of the proposed algorithm. The key advantage of the method lies in its capacity to estimate the detector parameters and then compensate for impulsive noise contamination on a frame-by-frame basis. We define the algorithm and present several experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published recursive least squares-based methods. We show that the proposed method removes impulsive noise contamination from the image.
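A stripped-down version of such a scheme, estimating only a per-pixel offset with an RLS-style gain and a feedback check that suppresses impulsive updates, might look as follows. The smooth-scene assumption (frame mean as the scene estimate) and the 3-sigma clipping rule are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

def rls_nuc(frames, lam=0.99, spike_thresh=3.0):
    """Scene-based offset correction: per-pixel recursive least-squares
    estimate of fixed-pattern offset, with a feedback check that skips
    updates producing impulsive (outlier) corrections."""
    h, w = frames[0].shape
    offset = np.zeros((h, w))
    p = np.ones((h, w))                 # per-pixel inverse "information"
    for f in frames:
        target = f.mean()               # assume a spatially smooth true scene
        err = f - offset - target       # residual against current estimate
        k = p / (lam + p)               # RLS gain
        update = k * err
        # feedback: zero impulsive updates instead of applying them
        s = update.std() + 1e-12
        update = np.where(np.abs(update) > spike_thresh * s, 0.0, update)
        offset += update
        p = (p - k * p) / lam           # RLS covariance update with forgetting
    return [f - offset for f in frames]
```

Each incoming frame refines the fixed-pattern estimate, so the correction improves frame by frame.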

Keywords: infrared focal plane arrays, infrared imaging, least mean square, nonuniformity correction

Procedia PDF Downloads 138
18859 A Compact Standing-Wave Thermoacoustic Refrigerator Driven by a Rotary Drive Mechanism

Authors: Kareem Abdelwahed, Ahmed Salama, Ahmed Rabie, Ahmed Hamdy, Waleed Abdelfattah, Ahmed Abd El-Rahman

Abstract:

Conventional vapor-compression refrigeration systems rely on typical refrigerants such as CFCs, HCFCs, and ammonia. Despite their suitable thermodynamic properties and stability in the atmosphere, their global warming potential and ozone depletion potential raise concerns about their usage. Thus, the need for new refrigeration systems that are environment-friendly, inexpensive, and simple in construction has strongly motivated the development of thermoacoustic energy conversion systems. A thermoacoustic refrigerator (TAR) is a device consisting mainly of a resonator, a stack, and two heat exchangers. Typically, the resonator is a long circular tube, made of copper or steel and filled with helium as the working gas, while the stack has short, relatively low-thermal-conductivity ceramic parallel plates aligned with the direction of the prevailing resonant wave. Typically, the resonator of a standing-wave refrigerator has one end closed and is bounded by the acoustic driver at the other end, enabling the propagation of a half-wavelength acoustic excitation. The hot and cold heat exchangers are made of copper to allow efficient heat transfer between the working gas and the external heat source and sink, respectively. TARs are interesting because they have no moving parts, unlike conventional refrigerators, and have almost no environmental impact, as they rely on the conversion of acoustic and heat energies. Their fabrication process is rather simple, and their sizes span a wide variety of length scales. The viscous and thermal interactions between the stack plates, the heat exchangers' plates, and the working gas significantly affect the flow field within the plates' channels and the energy flux density at the plates' surfaces, respectively. Here, the design, manufacture, and testing of a compact refrigeration system based on thermoacoustic energy-conversion technology are reported. 
A 1-D linear acoustic model is carefully developed, followed by the hardware build and testing procedures. The system consists of two harmonically oscillating pistons driven by a simple 1-HP rotary drive mechanism operating at a frequency of 42 Hz (thereby replacing the typically expensive linear motors and loudspeakers) and a thermoacoustic stack within which the energy conversion of sound into heat takes place. Air at ambient conditions is used as the working gas, while the amplitude of the driver's displacement reaches 19 mm. The 30-cm-long stack is a simple porous ceramic material with 100 square channels per square inch. During operation, both the oscillating gas pressure and the solid-stack temperature are recorded for further analysis. Measurements show a maximum temperature difference of about 27 degrees between the stack's hot and cold ends, with a Carnot coefficient of performance of 11 and an estimated cooling capacity of five watts when operating at ambient conditions. A dynamic pressure of 7 kPa amplitude is recorded, yielding a drive ratio of approximately 7%, in good agreement with theoretical prediction. The system behavior is clearly non-linear, and significant non-linear loss mechanisms are evident. This work helps in understanding the operating principles of thermoacoustic refrigerators and presents a keystone towards developing commercial thermoacoustic refrigerator units.
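The reported figures are mutually consistent, as a quick back-of-the-envelope check shows. The ambient pressure and the cold-side temperature near 295 K used below are assumed values, since the abstract does not state them.

```python
# Drive ratio: dynamic pressure amplitude over mean (ambient) pressure
p_amp = 7.0e3        # measured dynamic pressure amplitude, Pa
p_mean = 101.325e3   # assumed standard atmospheric pressure, Pa
drive_ratio = p_amp / p_mean          # close to the ~7% reported

# Carnot COP for the measured 27-degree span, cold side near ambient
t_cold = 295.0       # assumed cold-side temperature, K
dt = 27.0            # measured stack end-to-end temperature difference, K
cop_carnot = t_cold / dt              # close to the reported value of 11
```

Both derived values match the abstract's reported drive ratio and Carnot coefficient of performance.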

Keywords: refrigeration system, rotary drive mechanism, standing-wave, thermoacoustic refrigerator

Procedia PDF Downloads 365
18858 Failure Simulation of Small-Scale Walls with Chases Using the Lattice Discrete Element Method

Authors: Karina C. Azzolin, Luis E. Kosteski, Alisson S. Milani, Raquel C. Zydeck

Abstract:

This work aims to numerically reproduce tests experimentally performed on reduced-scale walls with horizontal and inclined chases, using the Lattice Discrete Element Method (LDEM) implemented in the Abaqus/Explicit environment. The chases were cut with depths of 20%, 30%, and 50% in walls subjected to centered and eccentric loading. The parameters used to evaluate the numerical model are its strength, the failure mode, and the in-plane and out-of-plane displacements.

Keywords: structural masonry, wall chases, small scale, numerical model, lattice discrete element method

Procedia PDF Downloads 171
18857 Effect of Experience on Evacuation of Mice in Emergency Conditions

Authors: Teng Zhang, Shenshi Huang, Gang Xu, Xuelin Zhang, Shouxiang Lu

Abstract:

With the acceleration of urbanization and the growth of city populations, the evacuation of pedestrians from disaster environments, such as a fire in a room or other confined space, has become a vital issue in modern society. Mice have been used in experimental crowd evacuation in recent years for their good similarity to humans in physical structure and stress reaction. In this study, the effect of experience, or memory, on the collective behavior of mice was explored. To help mice familiarize themselves with the layout of the space and the stimulus caused by smoke, we trained them repeatedly for 2 days so that they could escape from the emergency conditions as quickly as possible. The escape pattern, trajectories, walking speed, turning angle, and mean individual escape time of the mice in each training trial were analyzed. We found that mice build memory quickly after the first trial on the first day. On the second day, the evacuation of the mice remained in a stable and efficient state. Meanwhile, the group of size 30 (G30) had a shorter mean individual escape time than G12. Furthermore, we tested the mice's retention of the evacuation skill after several days and found that they hold the experience, or memory, for over 3 weeks. We propose the importance of experience of evacuation skill, and of research on training methods, in experimental evacuation of mice. The results deepen our understanding of the collective behavior of mice and contribute to the establishment of animal models for the study of pedestrian crowd dynamics in emergency conditions.

Keywords: experience, evacuation, mice, group size, behavior

Procedia PDF Downloads 264