Search results for: improved Canny algorithm

5963 A Statistical-Algorithmic Approach for the Design and Evaluation of a Fresnel Solar Concentrator-Receiver System

Authors: Hassan Qandil

Abstract:

Using a statistical algorithm implemented in MATLAB, four types of non-imaging Fresnel lenses are designed: spot-flat, linear-flat, dome-shaped, and semi-cylindrical-shaped. The optimization employs a statistical ray-tracing methodology for the incident light, mainly considering the effects of chromatic aberration, varying focal lengths, solar inclination and azimuth angles, lens and receiver apertures, and the optimum number of prism grooves. While adopting an equal-groove-width assumption for the poly-methyl-methacrylate (PMMA) prisms, the main target is to maximize the ray intensity on the receiver’s aperture and thereby achieve higher heat flux values. The algorithm outputs prism angles and 2D sketches. 3D drawings are then generated in AutoCAD and linked to COMSOL Multiphysics to simulate the lenses under solar ray conditions, providing optical and thermal analysis at both the lens’s and the receiver’s apertures, with conditions set from Dallas, TX weather data. Once the lens characterization is finalized, receivers are designed based on the optimized lens aperture size. Several cavity shapes, including triangular, arc-shaped, and trapezoidal, are tested in combination with a variety of receiver materials, working fluids, heat transfer mechanisms, and enclosure designs. A vacuum-reflective enclosure is also simulated for enhanced thermal absorption efficiency. Each receiver type is simulated in COMSOL while coupled with the optimized lens. A lab-scale prototype of the optimum lens-receiver configuration is then fabricated for experimental evaluation. Application-based testing is also performed for the selected configuration, including a photovoltaic-thermal cogeneration system and a solar furnace system. Finally, future research work is outlined, including coupling the collector-receiver system with an end-user power generator and using a multi-layered genetic algorithm for comparative studies.

Keywords: COMSOL, concentrator, energy, fresnel, optics, renewable, solar

Procedia PDF Downloads 148
5962 ORR Electrocatalyst for Batteries and Fuel Cells Development with SIO₂/Carbon Black Based Composite Nanomaterials

Authors: Maryam Kiani

Abstract:

This study focuses on the development of composite nanomaterials based on SiO₂ and carbon black for oxygen reduction reaction (ORR) electrocatalysts in batteries and fuel cells. The aim was to explore the potential of these composite materials as efficient catalysts for ORR, which is a critical process in energy conversion devices. The SiO₂/carbon black composite nanomaterials were synthesized using a facile and scalable method. The morphology, structure, and electrochemical properties of the materials were characterized using various techniques including scanning electron microscopy (SEM), X-ray diffraction (XRD), and electrochemical measurements. The results demonstrated that the incorporation of SiO₂ into the carbon black matrix enhanced the ORR performance of the composite material. The composite nanomaterials exhibited improved electrocatalytic activity, enhanced stability, and increased durability compared to pure carbon black. The presence of SiO₂ facilitated the formation of active sites, improved electron transfer, and increased the surface area available for ORR. This study contributes to the advancement of battery and fuel cell technology by offering a promising approach for the development of high-performance ORR electrocatalysts. The SiO₂/carbon black composite nanomaterials show great potential for improving the efficiency and durability of energy conversion devices, leading to more sustainable and efficient energy solutions.

Keywords: ORR, fuel cells, batteries, electrocatalyst

Procedia PDF Downloads 100
5961 Graphene Reinforced Magnesium Metal Matrix Composites for Biomedical Applications

Authors: Khurram Munir, Cuie Wen, Yuncang Li

Abstract:

Magnesium (Mg) metal matrix composites (MMCs) reinforced with graphene nanoplatelets (GNPs) have been developed by powder metallurgy (PM). In this study, GNPs with different concentrations (0.1-0.3 wt.%) were dispersed into Mg powders by high-energy ball-milling processes. The microstructure and resultant mechanical properties of the fabricated nanocomposites were characterized using transmission electron microscopy (TEM), scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDX), X-ray diffraction (XRD), Raman spectroscopy (RS), compression and nano-wear tests. The corrosion resistance of the fabricated composites was evaluated by electrochemical tests and hydrogen evolution measurements. Finally, the biological response of Mg-GNPs composites was assessed using osteoblast-like SaOS2 cells. The results indicate that GNPs are excellent candidates as reinforcements in Mg matrices for the manufacture of biodegradable Mg-based composite implants. GNP addition improved the mechanical properties of Mg via synergetic strengthening modes. Moreover, retaining the structural integrity of GNPs during PM processing improved the ductility, compressive strength, and corrosion resistance of the Mg-GNP composites as compared to monolithic Mg. Cytotoxicity assessments did not reveal any significant toxicity with the addition of GNPs to Mg matrices. This study demonstrates that Mg-xGNPs with x < 0.3 wt.%, may constitute novel biodegradable implant materials for load-bearing applications.

Keywords: magnesium-graphene composites, strengthening mechanisms, In vitro cytotoxicity, biocorrosion

Procedia PDF Downloads 153
5960 Accelerated Structural Reliability Analysis under Earthquake-Induced Tsunamis by Advanced Stochastic Simulation

Authors: Sai Hung Cheung, Zhe Shao

Abstract:

Recent earthquake-induced tsunamis, such as those in Padang in 2004 and Tohoku in 2011, brought huge losses of lives and property. Maintaining vertical evacuation systems is the most crucial strategy for effectively reducing casualties during a tsunami event. It is therefore of great interest to quantify the risk to structural dynamic systems due to earthquake-induced tsunamis. Despite continuous advancement in computational simulation of tsunamis and wave-structure interaction modeling, it remains computationally challenging to evaluate the reliability (or its complement, the failure probability) of a structural dynamic system when uncertainties related to the system and its modeling are taken into account. Failure of the structure in a tsunami-wave-structure system is defined as any response quantity of the system exceeding specified thresholds during the time when the structure is subjected to dynamic wave impact due to earthquake-induced tsunamis. In this paper, an approach based on a novel integration of the Subset Simulation algorithm with a recently proposed moving least squares response surface approach for stochastic sampling is presented. The effectiveness of the proposed approach is discussed by comparing its results with those obtained from the Subset Simulation algorithm without the response surface approach.
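
As a point of reference for the stochastic simulation component, the sketch below outlines plain Subset Simulation for a rare-event failure probability, with a generic limit-state function g(x) standing in for the tsunami-wave-structure response. The standard-normal inputs, proposal width, and toy limit state are assumptions of this illustration, and the moving least squares response surface of the paper is not reproduced.

```python
# Minimal Subset Simulation sketch; failure is defined as g(x) >= 0.
import numpy as np

def subset_simulation(g, dim, n_samples=1000, p0=0.1, max_levels=10, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, dim))      # level-0 Monte Carlo samples
    y = np.array([g(xi) for xi in x])
    p_f, n_keep = 1.0, int(p0 * n_samples)
    for _ in range(max_levels):
        idx = np.argsort(y)[::-1][:n_keep]          # largest responses = closest to failure
        b = y[idx[-1]]                              # intermediate threshold
        if b >= 0.0:                                # failure domain reached
            return p_f * np.mean(y >= 0.0)
        p_f *= p0
        chains = n_samples // n_keep
        new_x, new_y = [], []
        # Markov chains seeded at the conditional samples (simplified modified Metropolis)
        for xs, ys in zip(x[idx], y[idx]):
            cur_x, cur_y = xs, ys
            for _ in range(chains):
                cand = cur_x + 0.5 * rng.standard_normal(dim)
                # accept against the standard-normal prior, then enforce g(cand) >= b
                if rng.random() < min(1.0, np.exp(0.5 * (cur_x @ cur_x - cand @ cand))):
                    gy = g(cand)
                    if gy >= b:
                        cur_x, cur_y = cand, gy
                new_x.append(cur_x); new_y.append(cur_y)
        x, y = np.array(new_x), np.array(new_y)
    return p_f * np.mean(y >= 0.0)

# toy demand-exceeds-capacity limit state: failure when the sum of 4 standard normals >= 9
print(subset_simulation(lambda x: x.sum() - 9.0, dim=4))
```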

Keywords: response surface model, subset simulation, structural reliability, Tsunami risk

Procedia PDF Downloads 373
5959 Modeling Continuous Flow in a Curved Channel Using Smoothed Particle Hydrodynamics

Authors: Indri Mahadiraka Rumamby, R. R. Dwinanti Rika Marthanty, Jessica Sjah

Abstract:

Smoothed particle hydrodynamics (SPH) was originally created to simulate nonaxisymmetric phenomena in astrophysics. However, the method still has several shortcomings, namely the high computational cost required to model values at high resolution and problems with boundary conditions. The difficulty in modeling boundary conditions arises because the SPH method suffers from particle deficiency when the integral of the kernel function is truncated by a boundary. This research aims to determine whether SPH modeling focused on boundary layer interactions and continuous flow can produce quantifiably accurate values at low computational cost. It combines, in a single main program, a meandering river model, a continuous flow algorithm, and a solid-fluid interaction algorithm, with the aim of obtaining quantitatively accurate results for solid-fluid interactions under continuous flow in a meandering channel using the SPH method. The SPH numerical model is implemented in the Fortran programming language; the model takes the form of a U-shaped meandering open channel in 3D, in which the channel walls are represented by soil particles, and uses a continuous flow with a limited number of particles.
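
For orientation, the following is a minimal sketch of two standard SPH ingredients mentioned above, the cubic spline smoothing kernel and a summation-density estimate; the 2-D normalisation constant, particle spacing, and smoothing length are textbook assumptions, not values from the authors' Fortran implementation.

```python
import numpy as np

def cubic_spline_w(r, h):
    # Monaghan cubic spline kernel with the common 2-D normalisation 10/(7*pi*h^2)
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)
    w = np.where(q <= 1.0, 1 - 1.5 * q ** 2 + 0.75 * q ** 3,
        np.where(q <= 2.0, 0.25 * (2 - q) ** 3, 0.0))
    return sigma * w

# summation density rho_i = sum_j m_j * W(|x_i - x_j|, h) on a small particle patch
h, m = 0.1, 0.01
x = np.stack(np.meshgrid(np.arange(0, 1, 0.05), np.arange(0, 1, 0.05)), -1).reshape(-1, 2)
r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
rho = (m * cubic_spline_w(r, h)).sum(axis=1)
print(rho.mean())          # lower near the patch edges: the particle-deficiency effect noted above
```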

Keywords: smoothed particle hydrodynamics, computational fluid dynamics, numerical simulation, fluid mechanics

Procedia PDF Downloads 120
5958 Remote Assessment and Change Detection of GreenLAI of Cotton Crop Using Different Vegetation Indices

Authors: Ganesh B. Shinde, Vijaya B. Musande

Abstract:

Timely identification of the cotton crop has significant advantages for food, economic, and environmental planning. Because of these advantages, accurate detection of cotton crop regions using supervised learning procedures is an important yet challenging problem in remote sensing. Classifiers applied directly to the image play a major role here, but their results are not very satisfactory. To further improve effectiveness, a variety of vegetation indices have been proposed in the literature, and the main challenge is to find the vegetation indices best suited to cotton crop identification within the proposed methodology. Accordingly, fuzzy c-means clustering is combined with a neural network trained by the Levenberg-Marquardt algorithm for cotton crop classification. To evaluate the proposed method, five LISS-III satellite images were taken, and experiments were carried out with six vegetation indices: Simple Ratio, Normalized Difference Vegetation Index, Enhanced Vegetation Index, Green Atmospherically Resistant Vegetation Index, Wide-Dynamic Range Vegetation Index, and Green Chlorophyll Index. Along with these indices, the Green Leaf Area Index was also considered for investigation. From the research outcome, the Green Atmospherically Resistant Vegetation Index outperformed all other indices, reaching an average accuracy of 95.21%.
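
A minimal sketch of how two of the indices named above can be computed from band reflectances follows; the NDVI and GARI formulas are the common literature definitions, and the band arrays and the gamma coefficient are assumptions for illustration only.

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index
    return (nir - red) / (nir + red + 1e-9)

def gari(nir, green, red, blue, gamma=1.7):
    # Green Atmospherically Resistant Vegetation Index (Gitelson-style form)
    corr = green - gamma * (blue - red)
    return (nir - corr) / (nir + corr + 1e-9)

# synthetic reflectance bands standing in for LISS-III imagery
nir, red, green, blue = (np.random.rand(4, 256, 256) * 0.5 + 0.1)
print(ndvi(nir, red).mean(), gari(nir, green, red, blue).mean())
```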

Keywords: Fuzzy C-Means clustering (FCM), neural network, Levenberg-Marquardt (LM) algorithm, vegetation indices

Procedia PDF Downloads 309
5957 Calibration and Validation of the Aquacrop Model for Simulating Growth and Yield of Rain-Fed Sesame (Sesamum Indicum L.) Under Different Soil Fertility Levels in the Semi-arid Areas of Tigray, Ethiopia

Authors: Abadi Berhane, Walelign Worku, Berhanu Abrha, Gebre Hadgu

Abstract:

Sesame is an important oilseed crop in Ethiopia and the second most exported agricultural commodity next to coffee. However, soil fertility management is poor and the farming system for the crop is not research-led. The AquaCrop model was applied as a decision-support tool; it uses a semi-quantitative approach to simulate crop yield under different soil fertility levels. The objective of this experiment was to calibrate and validate the AquaCrop model for simulating the growth and yield of sesame under different nitrogen fertilizer levels and to test the performance of the model as a decision-support tool for improved sesame cultivation in the study area. The experiment was laid out as a randomized complete block design (RCBD) in a factorial arrangement in the 2016, 2017, and 2018 main cropping seasons. Four nitrogen fertilizer rates (0, 23, 46, and 69 kg/ha nitrogen) and three improved varieties (Setit-1, Setit-2, and Humera-1) were tested. Growth, yield, and yield components of sesame were collected from each treatment. The coefficient of determination (R2), root mean square error (RMSE), normalized root mean square error (N-RMSE), model efficiency (E), and degree of agreement (D) were used to test the performance of the model. The results indicated that the AquaCrop model successfully simulated soil water content, with R2 varying from 0.92 to 0.98, RMSE from 6.5 to 13.9 mm, E from 0.78 to 0.94, and D from 0.95 to 0.99; the corresponding values for aboveground biomass (AB) varied from 0.92 to 0.98, 0.33 to 0.54 tons/ha, 0.74 to 0.93, and 0.9 to 0.98, respectively. The results for canopy cover of sesame also showed that the model acceptably simulated canopy cover, with R2 varying from 0.95 to 0.99 and an RMSE of 5.3 to 8.6%. The AquaCrop model was appropriately calibrated to simulate soil water content, canopy cover, aboveground biomass, and sesame yield; the results indicated that the model adequately simulated the growth and yield of sesame under the different nitrogen fertilizer levels. The AquaCrop model may therefore be an important tool for improved soil fertility management and yield enhancement strategies for sesame, and might be applied as a decision-support tool in soil fertility management for sesame production.
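
The goodness-of-fit statistics quoted above can be computed as sketched below; the definitions (Nash-Sutcliffe model efficiency E and Willmott's degree of agreement D, alongside R2, RMSE, and N-RMSE) follow common usage and are assumed here rather than copied from the paper.

```python
import numpy as np

def fit_stats(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    nrmse = 100.0 * rmse / obs.mean()                       # normalized RMSE in %
    e = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)   # Nash-Sutcliffe E
    d = 1.0 - np.sum((sim - obs) ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)        # Willmott's D
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return {"R2": r2, "RMSE": rmse, "N-RMSE": nrmse, "E": e, "D": d}

# toy observed vs. simulated values (e.g., biomass in tons/ha)
print(fit_stats(obs=[1.2, 1.8, 2.4, 3.1], sim=[1.1, 1.9, 2.6, 3.0]))
```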

Keywords: aquacrop model, normalized water productivity, nitrogen fertilizer, canopy cover, sesame

Procedia PDF Downloads 67
5956 Liposomal Encapsulation of Silver Nanoparticle for Improved Delivery and Enhanced Anticancer Properties

Authors: Azeez Yusuf, Alan Casey

Abstract:

Silver nanoparticles (AgNP) are among the most widely investigated metallic nanoparticles due to their promising antibacterial activities. In recent years, AgNP research has shifted beyond antimicrobial use to potential applications in the medical arena. This shift, coupled with the extensive commercial applications of AgNP, will further increase human exposure and the subsequent risk of adverse effects that may result from repeated exposure and inefficient delivery, meaning that research into improved AgNP delivery is of paramount importance. In this study, AgNP were encapsulated in a natural bio-surfactant, dipalmitoylphosphatidylcholine (DPPC), in an attempt to enhance the intracellular delivery and simultaneously mediate the associated cytotoxicity of the AgNP. It was noted that, as a result of the encapsulation, liposomal AgNP (Lipo-AgNP) at 0.625 μg/ml induced significant cell death in the THP1 cell line, a notably lower dose than that at which the uncoated AgNP induced cytotoxicity. The induced cytotoxicity was shown to result in an increased level of DNA fragmentation, causing a cell cycle interruption at the S phase. It was shown that the predominant form of cell death upon exposure to both uncoated and Lipo-AgNP was apoptosis; however, a ROS-independent activation of the executioner caspases 3/7 occurred upon exposure to the Lipo-AgNP. These findings show that encapsulation of AgNP enhances AgNP cytotoxicity and mediates a ROS-independent induction of apoptosis.

Keywords: silver nanoparticles, AgNP, cytotoxicity, encapsulation, liposome

Procedia PDF Downloads 148
5955 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

There exist a plethora of methods in the scientific literature which tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc. and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority being classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state of the art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, of which we mention the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score and Kolmogorov-Smirnov statistic, respectively. Finally, we obtain encouraging results, which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
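
A rough Python sketch of the evolutionary idea follows; the paper builds C# LINQ expression trees, so the tuple-based tree, the reduced operator set, and the toy client record below are stand-in assumptions used only to illustrate how a random formula is built, evaluated, and mutated.

```python
import random, operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}   # reduced operator set
VARS = ["age", "loan_duration", "employment_years"]               # hypothetical client properties

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(VARS + [round(random.uniform(-2, 2), 2)])   # leaf: variable or constant
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, client):
    if isinstance(tree, str):
        return client[tree]
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, client), evaluate(right, client))

def mutate(tree, depth=2):
    # replace a random subtree, the counterpart of the subtree operators applied to the flattened tree
    if isinstance(tree, tuple) and random.random() < 0.7:
        op, l, r = tree
        return (op, mutate(l, depth), r) if random.random() < 0.5 else (op, l, mutate(r, depth))
    return random_tree(depth)

client = {"age": 35, "loan_duration": 24, "employment_years": 7}
formula = mutate(random_tree())
score = evaluate(formula, client)                 # threshold at 0 gives the binary response
print(formula, "GOOD" if score >= 0 else "BAD")
```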

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 111
5954 Lip Localization Technique for Myanmar Consonants Recognition Based on Lip Movements

Authors: Thein Thein, Kalyar Myo San

Abstract:

Lip reading systems are one of several supportive technologies for hearing-impaired people, elderly people, and non-native speakers. For people with normal hearing, lip reading techniques can be used to increase understanding of spoken language in noisy environments or in conditions where the audio signal is not available. Hearing-impaired people have long used lip reading as an important tool to find out what was said by others without hearing their voice. Visual speech information is therefore important and has become an active research area. Using visual information from lip movements can improve the accuracy and robustness of a speech recognition system, and the need for lip reading systems is ever increasing for every language. However, recognizing lip movement is a difficult task because the region of interest (ROI) is nonlinear and noisy. Therefore, this paper proposes a method to detect the accurate lip shape and to localize lip movement for automatic lip tracking by combining Otsu's global thresholding technique with the Moore neighborhood tracing algorithm. The proposed method shows accurate lip localization and tracking, which is useful for speech recognition. In this work, experiments are carried out on automatically localizing the lip shape for Myanmar consonants using only visual information from lip movements, which is useful for visual speech recognition of the Myanmar language.
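
The two building blocks named above can be sketched as follows; the synthetic test image, the simplified stopping rule in the tracer, and the omission of the mouth-region cropping and recognition stages are assumptions of this illustration.

```python
import numpy as np

def otsu_threshold(gray):
    # pick the threshold that maximises between-class variance of the grey-level histogram
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total, best_t, best_var = gray.size, 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:t] * np.arange(t)).sum() / w0
        m1 = (hist[t:] * np.arange(t, 256)).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2 / total ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def moore_trace(mask):
    # clockwise Moore-neighbourhood boundary tracing from the first foreground pixel
    offs = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    ys, xs = np.nonzero(mask)
    start = (ys[0], xs[0])
    boundary, cur, search_from = [start], start, 0
    for _ in range(mask.size):                       # hard cap on steps
        for k in range(8):
            d = (search_from + k) % 8
            ny, nx = cur[0] + offs[d][0], cur[1] + offs[d][1]
            if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] and mask[ny, nx]:
                cur, search_from = (ny, nx), (d + 5) % 8   # restart scan just after the backtrack
                break
        if cur == start:
            break
        boundary.append(cur)
    return boundary

gray = (np.random.rand(64, 64) * 100).astype(np.uint8)
gray[20:40, 15:50] += 120                            # bright blob standing in for the lip region
t = otsu_threshold(gray)
print(t, len(moore_trace(gray > t)))
```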

Keywords: lip reading, lip localization, lip tracking, Moore neighborhood tracing algorithm

Procedia PDF Downloads 348
5953 Effect of Surface Preparation of Concrete Substrate on Bond Tensile Strength of Thin Bonded Cement Based Overlays

Authors: S. Asad Ali Gillani, Ahmed Toumi, Anaclet Turatsinze

Abstract:

After a certain period of time, the degradation of concrete structures is unavoidable. For large concrete areas, a thin bonded cement-based overlay is a suitable rehabilitation technique. Previous research demonstrated that the durability of bonded cement-based repairs is always a concern, and one of its main causes is debonding at the interface. Since the durability and efficiency of any repair system depend mainly on the bond between the concrete substrate and the repair material, this bond can be improved by increasing the surface roughness. The surface roughness can be improved by surface treatment of the concrete substrate, which enhances mechanical interlocking, one of the basic mechanisms of adhesion between two surfaces. In this research, the bond tensile strength of cement-based overlays on substrate surfaces prepared using different techniques has been characterized. In the first step, a cement-based substrate was prepared and then cured for three months. After curing, two different surface treatments were performed on this substrate: cutting and sandblasting. In the second step, the overlay was cast on the cut and sandblasted surfaces, as well as on a surface without any treatment. Finally, the bond tensile strength of the cement-based overlays was evaluated in a direct tension test, and the results are discussed in this paper.

Keywords: concrete substrate, surface preparation, overlays, bond tensile strength

Procedia PDF Downloads 452
5952 Delineation of Green Infrastructure Buffer Areas with a Simulated Annealing: Consideration of Ecosystem Services Trade-Offs in the Objective Function

Authors: Andres Manuel Garcia Lamparte, Rocio Losada Iglesias, Marcos Boullón Magan, David Miranda Barros

Abstract:

The biodiversity strategy of the European Union for 2030 mentions climate change as one of the key factors in biodiversity loss and considers green infrastructure as one of the solutions to this problem. In this line, the European Commission has developed a green infrastructure strategy which commits member states to consider green infrastructure in their territorial planning. This green infrastructure is aimed at granting the provision of a wide number of ecosystem services to support biodiversity and human well-being by countering the effects of climate change. Yet, few tools are available to delimit green infrastructure. The available ones consider the potential of the territory to provide ecosystem services. However, these methods usually aggregate several maps of ecosystem service potential without considering possible trade-offs. This can lead to excluding areas with a high potential for providing ecosystem services that have many trade-offs with other ecosystem services. In order to tackle this problem, a methodology is proposed to consider ecosystem service trade-offs in the objective function of a simulated annealing algorithm aimed at delimiting green infrastructure multifunctional buffer areas. To this end, the provision potential maps of the regulating ecosystem services considered to delimit the multifunctional buffer areas are clustered into groups, so that ecosystem services that create trade-offs are excluded in each group. The normalized provision potential maps of the ecosystem services in each group are added to obtain a potential map per group, which is normalized again. Then the potential maps for each group are combined in a raster map that shows the highest provision potential value in each cell. The combined map is then used in the objective function of the simulated annealing algorithm. The algorithm is run both using the proposed methodology and considering the ecosystem services individually. The results are analyzed with spatial statistics and landscape metrics to check the number of ecosystem services that the delimited areas produce, as well as their regularity and compactness. It has been observed that the proposed methodology increases the number of ecosystem services produced by the delimited areas, improving their multifunctionality and increasing their effectiveness in preventing climate change impacts.
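
The selection step of such an algorithm can be sketched as below; the grid, potential values, area budget, and compactness bonus are invented for illustration and do not reproduce the paper's data or exact objective function.

```python
import numpy as np

rng = np.random.default_rng(1)
potential = rng.random((40, 40))             # combined (max-per-cell) ecosystem-service potential map
budget = 120                                 # number of cells allowed in the buffer area

def objective(mask):
    # reward total potential of selected cells plus a small bonus for adjacent selected pairs
    compact = np.count_nonzero(mask[:-1] & mask[1:]) + np.count_nonzero(mask[:, :-1] & mask[:, 1:])
    return potential[mask].sum() + 0.05 * compact

mask = np.zeros(potential.shape, bool)
mask.flat[rng.choice(potential.size, budget, replace=False)] = True
current, T = objective(mask), 1.0
for step in range(20000):
    T *= 0.9995                              # geometric cooling schedule
    out = rng.choice(np.flatnonzero(mask.ravel()))
    inn = rng.choice(np.flatnonzero(~mask.ravel()))
    mask.flat[[out, inn]] = [False, True]    # swap one selected cell for one unselected cell
    new = objective(mask)
    if new >= current or rng.random() < np.exp((new - current) / T):
        current = new                        # accept (always if better, sometimes if worse)
    else:
        mask.flat[[out, inn]] = [True, False]
print(round(current, 3))
```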

Keywords: ecosystem services trade-offs, green infrastructure delineation, multifunctional buffer areas, climate change

Procedia PDF Downloads 168
5951 Humans Trust Building in Robots with the Help of Explanations

Authors: Misbah Javaid, Vladimir Estivill-Castro, Rene Hexel

Abstract:

The field of robotics is advancing rapidly, to the point where robots have become an integral part of modern society. These robots collaborate and contribute productively with humans, compensating for some shortcomings in human abilities and complementing them with their own skills. Effective teamwork between humans and robots demands investigation of the critical issue of trust. The field of human-computer interaction (HCI) has already examined the trust humans place in technical systems, mostly regarding issues such as reliability and accuracy of performance. Early work in the area of expert systems suggested that automatic generation of explanations improved the trust and acceptability of these systems. In this work, we augmented a robot with user-invoked explanation generation. To measure the effect of explanations on humans' level of trust, we collected subjective survey measures and behavioral data in a human-robot team task set in an interactive, adversarial, partial-information environment. The results showed that, with the explanation capability, humans not only understood but also recognized the robot as an expert team partner. It was also observed that human learning and human-robot team performance improved significantly because of the meaningful interaction with the robot in the human-robot team. Moreover, by observing these distinctive outcomes, we expect our research will also provide insights into further improvement of trustworthy human-robot relationships.

Keywords: explanation interface, adversaries, partial observability, trust building

Procedia PDF Downloads 195
5950 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights

Authors: Nelson Bii, Christopher Ouma, John Odhiambo

Abstract:

Non-response is a potential source of errors in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one of the techniques of reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, assuming full auxiliary information is available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response. In particular, the auxiliary information is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the estimator proposed are derived. Besides, a simulation study conducted indicates that the proposed estimator has smaller values of the bias and smaller mean squared error values compared to existing estimators of finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
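
For reference, the plain Nadaraya-Watson estimator that the improved technique builds on can be written as follows; the Gaussian kernel, bandwidth, and toy data are assumptions made only for illustration.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h=0.3):
    # weights w_i(x) = K((x - x_i)/h) / sum_j K((x - x_j)/h), with a Gaussian kernel K
    d = (x_eval[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * d ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 3, 200))                   # auxiliary variable
y = np.sin(2 * x) + 0.2 * rng.standard_normal(200)    # survey variable with noise
grid = np.linspace(0, 3, 5)
print(np.round(nadaraya_watson(x, y, grid), 3))
```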

Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths

Procedia PDF Downloads 130
5949 Random Forest Classification for Population Segmentation

Authors: Regina Chua

Abstract:

To reduce the costs of re-fielding a large survey, a Random Forest classifier was applied to measure the accuracy of classifying individuals into their assigned segments with the fewest possible questions. Given a long survey, one needed to determine the most predictive ten or fewer questions that would accurately assign new individuals to custom segments. Furthermore, the solution needed to be quick in its classification and usable in non-Python environments. In this paper, a supervised Random Forest classifier was modeled on a dataset with 7,000 individuals, 60 questions, and 254 features. The Random Forest consisted of an iterative collection of individual decision trees that result in a predicted segment with robust precision and recall scores compared to a single tree. A random 70-30 stratified sampling for training the algorithm was used, and accuracy trade-offs at different depths for each segment were identified. Ultimately, the Random Forest classifier performed at 87% accuracy at a depth of 10 with 20 instead of 254 features and 10 instead of 60 questions. With an acceptable accuracy in prioritizing feature selection, new tools were developed for non-Python environments: a worksheet with a formulaic version of the algorithm and an embedded function to predict the segment of an individual in real-time. Random Forest was determined to be an optimal classification model by its feature selection, performance, processing speed, and flexible application in other environments.
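
A condensed scikit-learn version of the workflow described above might look like the following; the synthetic data, the assumed five segments, and the choice of the top 20 features are placeholders for the survey dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# synthetic stand-in for the 7,000-respondent, 254-feature survey data
X, y = make_classification(n_samples=7000, n_features=254, n_informative=30,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, max_depth=10, random_state=0).fit(X_tr, y_tr)
top20 = np.argsort(rf.feature_importances_)[::-1][:20]          # keep the 20 most important features
rf_small = RandomForestClassifier(n_estimators=200, max_depth=10, random_state=0)
rf_small.fit(X_tr[:, top20], y_tr)
print("full:", rf.score(X_te, y_te), "top-20:", rf_small.score(X_te[:, top20], y_te))
```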

Keywords: machine learning, supervised learning, data science, random forest, classification, prediction, predictive modeling

Procedia PDF Downloads 88
5948 Classification of Land Cover Usage from Satellite Images Using Deep Learning Algorithms

Authors: Shaik Ayesha Fathima, Shaik Noor Jahan, Duvvada Rajeswara Rao

Abstract:

Earth's environment and its evolution can be observed through satellite images in near real-time. Through satellite imagery, remote sensing data provide crucial information that can be used for a variety of applications, including image fusion, change detection, land cover classification, agriculture, mining, disaster mitigation, and monitoring climate change. The objective of this project is to propose a method for classifying satellite images according to multiple predefined land cover classes. The proposed approach involves collecting data in image format. The data is then pre-processed using data pre-processing techniques. The processed data is fed into the proposed algorithm, and the obtained result is analyzed. Some of the algorithms used in satellite imagery classification are U-Net, Random Forest, DeepLabv3, CNN, ANN, ResNet, etc. In this project, we use the DeepLabv3 (atrous convolution) algorithm for land cover classification. The dataset used is the DeepGlobe land cover classification dataset. DeepLabv3 is a semantic segmentation system that uses atrous convolution to capture multi-scale context by adopting multiple atrous rates in cascade or in parallel to determine the scale of segments.
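
The atrous-convolution idea behind DeepLabv3 can be illustrated with a toy ASPP-like head as below; PyTorch, the feature-map shapes, and the seven-class output are assumptions of this sketch rather than the project's actual training pipeline.

```python
import torch
import torch.nn as nn

class TinyASPP(nn.Module):
    def __init__(self, in_ch=64, out_ch=32, rates=(1, 6, 12)):
        super().__init__()
        # parallel branches with different dilation rates see different context scales
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates)                     # padding = dilation keeps H x W unchanged
        self.project = nn.Conv2d(out_ch * len(rates), 7, kernel_size=1)  # assumed 7 land-cover classes

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

feat = torch.randn(1, 64, 128, 128)            # backbone feature map
print(TinyASPP()(feat).shape)                  # -> torch.Size([1, 7, 128, 128])
```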

Keywords: area calculation, atrous convolution, deep globe land cover classification, deepLabv3, land cover classification, resnet 50

Procedia PDF Downloads 135
5947 An Improved Multiple Scattering Reflectance Model Based on Specular V-Cavity

Authors: Hongbin Yang, Mingxue Liao, Changwen Zheng, Mengyao Kong, Chaohui Liu

Abstract:

Microfacet-based reflection models are widely used to model light reflection from rough surfaces. Microfacet models have become the standard surface material building block for describing specular components with varying roughness; yet, while they possess many desirable properties and produce convincing results, their design ignores important sources of scattering, which can cause a significant loss of energy. Specifically, they only simulate single scattering on the microfacets and ignore the subsequent interactions. As the roughness increases, these interactions become more and more important. A multiple-scattering microfacet model based on the specular V-cavity has been presented for this important open problem; however, it wastes rendering time by using the same number of scattering events for surfaces of different roughness. In this paper, we design a geometric attenuation term G to compute the BRDF (bidirectional reflection distribution function) of multiple scattering on rough surfaces. Moreover, we determine the number of scattering events by deterministic heuristics for different surface roughnesses. As a result, our model produces an appearance similar to the state-of-the-art model with significantly improved rendering efficiency. Finally, we derive a multiple-scattering BRDF based on the original microfacet framework.
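
For context, the classical single-scattering V-cavity (Cook-Torrance) geometric attenuation term, which the multiple-scattering extension generalizes, is sketched below; unit vectors in the upper hemisphere are assumed.

```python
import numpy as np

def g_vcavity(n, v, l):
    # shadowing/masking of the specular V-cavity: G = min(1, 2(N.H)(N.V)/(V.H), 2(N.H)(N.L)/(V.H))
    h = (v + l) / np.linalg.norm(v + l)                      # half vector
    nh, nv, nl, vh = (n @ h), (n @ v), (n @ l), (v @ h)
    return min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)

n = np.array([0.0, 0.0, 1.0])
v = np.array([0.3, 0.0, 0.95]); v /= np.linalg.norm(v)
l = np.array([-0.5, 0.2, 0.84]); l /= np.linalg.norm(l)
print(round(g_vcavity(n, v, l), 4))
```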

Keywords: bidirectional reflection distribution function, BRDF, geometric attenuation term, multiple scattering, V-cavity model

Procedia PDF Downloads 109
5946 Effect of Damper Combinations in Series or Parallel on Structural Response

Authors: Ajay Kumar Sinha, Sharad Singh, Anukriti Sinha

Abstract:

Passive energy dissipation methods for earthquake protection of structures are undergoing development for improved performance. The combined use of different types of damping mechanisms has shown positive results in the recent past. Different supplemental damping methods, such as viscous damping, frictional damping, and metallic damping, are being combined for optimum performance. The conventional method of connecting passive dampers to structures is a parallel connection between the damper unit and the structural member. Researchers are investigating the coupling effect of different types of dampers. The most popular choice among the research community is the coupling of viscous dampers and frictional dampers. The series and parallel coupling of these damping units are being studied for the relative performance of the coupled system in controlling the response of structures against earthquakes. In this paper, an attempt has been made to couple fluid viscous dampers and frictional dampers in series and in parallel to form a single damping unit. The relative performance of the coupled units has been studied on a three-dimensional reinforced concrete framed structure. The current theories of structural dynamics in practice for viscous damping and frictional damping have been incorporated in this study. Time history analyses of the structural system with coupled damper units, with uncoupled damper units, and without any supplemental damping have been performed. The investigations reported in this study show significantly improved performance of the coupled system. A natural frequency of the system further outside the forcing frequency has been obtained for structural systems with coupled damper units as against the other cases. The structural response in terms of storey displacement and storey drift shows significant improvement for the case with coupled damper units as against the cases with uncoupled units or without any supplemental damping. The results are promising in terms of improved response of the structure with coupled damper units. Further investigations into the comparative performance of the series- and parallel-coupled systems will be carried out to study the optimum behavior of these coupled systems for enhanced response control of structural systems.

Keywords: frictional damping, parallel coupling, response control, series coupling, supplemental damping, viscous damping

Procedia PDF Downloads 445
5945 An Investigation on the Effect of Window Tinting on Thermal Comfort inside Office Buildings

Authors: S. El-Azzeh, A. Al-Aqqad, M. Salem, H. Al-Khaldi, S. Thaher

Abstract:

Thermal comfort studies are very important during the early stages of a building's design. If such a study is ignored, problems will arise for the occupants later. In hot climates, where solar radiation enters buildings all year long, occupants' thermal comfort in office buildings needs to be examined. This study aims to investigate thermal comfort in an existing office building at the Australian College of Kuwait, test its validity, and improve occupants' thermal satisfaction by covering the windows with a heat-rejection tint material that lets sunlight pass into the office while reflecting solar heat outside. Environmental variables were measured using the thermal comfort data logger INNOVA 1221 to find the predicted mean vote (PMV) at the selected location. Subjective variables were also measured to find the actual mean vote (AMV) through surveys distributed among occupants in the selected case study office. All the collected variables were analyzed and classified according to the international standards ISO 7730 and ASHRAE 55. The results of this study showed improvement in both PMV and AMV. The mean PMV based on the original design was 0.691, which dropped to 0.32 after installation, still within the comfort zone. The mean AMV also improved for the first occupant, from -0.46 to -1, which is cooler. For the other occupant, conditions were slightly warm, with a mean value of 0.9, and improved to a cooler -0.25, based on the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) seven-point scale.

Keywords: thermal comfort, office buildings, indoor environments, predicted mean vote

Procedia PDF Downloads 187
5944 Nano-Structured Hydrophobic Silica Membrane for Gas Separation

Authors: Sajid Shah, Yoshimitsu Uemura, Katsuki Kusakabe

Abstract:

Sol-gel derived hydrophobic silica membranes with pore sizes of less than 1 nm are quite attractive for gas separation over a wide range of temperatures. A nano-structured hydrophobic membrane was prepared by the sol-gel technique on a porous α–Al₂O₃ tubular support with yttria-stabilized zirconia (YSZ) as an intermediate layer. A bistriethoxysilylethane (BTESE)-derived sol was modified by adding phenyltriethoxysilylethane (PhTES) as an organic template. The six-times dip-coated modified silica membrane, with a thickness of about 782 nm, was characterized by field emission scanning electron microscopy. Thermogravimetric analysis, together with contact angle measurements and Fourier transform infrared spectroscopy, showed that the hydrophobic properties improved with increasing PhTES content. The contact angle of a water droplet increased from 37° for the pure membrane to 111.5° for the modified membrane. The permeance of single-gas H₂ was higher than that of the 75:25 H₂:CO₂ binary feed mixture. However, the H₂ permeance for the 60:40 H₂:CO₂ mixture was found to be lower than that for the single gas and for the 75:25 H₂:CO₂ binary mixture. The binary selectivity values for 75:25 H₂:CO₂ were 24.75, 44, and 57, respectively. Selectivity had an inverse relation with PhTES content, while hydrophobic properties improved with increasing PhTES content in the silica matrix. The system exhibits proper adhesion and integration of the three layers, as well as smoothness, and the membrane system is suitable for steam environments and high-temperature separation. It was concluded that the hydrophobic silica membrane is highly promising for the separation of H₂/CO₂ mixtures from various H₂-containing process streams.

Keywords: gas separation, hydrophobic properties, silica membrane, sol–gel method

Procedia PDF Downloads 119
5943 Open-Loop Vector Control of Induction Motor with Space Vector Pulse Width Modulation Technique

Authors: Karchung, S. Ruangsinchaiwanich

Abstract:

This paper presents an open-loop vector control method for an induction motor with the space vector pulse width modulation (SVPWM) technique. Normally, closed-loop speed control is preferred and is believed to be more accurate; however, it requires a position sensor to track the rotor position, which is not desirable for certain workspace applications. This paper demonstrates the performance of a three-phase induction motor with the simplest control algorithm, without the use of a position sensor or an estimation block to estimate the rotor position for sensorless control. The motor stator currents are measured and transformed to the synchronously rotating (d-q-axis) frame by means of the Clarke and Park transformations. The actual control happens in this frame, where the measured currents are compared with the reference currents. The error signal is fed to a conventional PI controller, and the corrected d-q voltage is generated. The controller outputs are transformed back to three-phase voltages and fed to the SVPWM block, which generates the PWM signal for the voltage source inverter. The open-loop vector control model along with the SVPWM algorithm is modeled in MATLAB/Simulink and validated experimentally on a TMS320F28335 DSP board.
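
The Clarke and Park transformations mentioned above can be sketched as follows; amplitude-invariant scaling and the balanced test currents are assumptions of this illustration, not the Simulink or DSP implementation.

```python
import numpy as np

def clarke(ia, ib, ic):
    # three-phase currents -> stationary alpha-beta frame (amplitude-invariant form)
    alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    beta = (2.0 / 3.0) * (np.sqrt(3) / 2.0) * (ib - ic)
    return alpha, beta

def park(alpha, beta, theta):
    # stationary alpha-beta frame -> synchronously rotating d-q frame at angle theta
    d = alpha * np.cos(theta) + beta * np.sin(theta)
    q = -alpha * np.sin(theta) + beta * np.cos(theta)
    return d, q

theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)          # rotating-frame angle samples
ia, ib, ic = np.cos(theta), np.cos(theta - 2*np.pi/3), np.cos(theta + 2*np.pi/3)
print(np.round(park(*clarke(ia, ib, ic), theta), 3))           # balanced set maps to d ~ 1, q ~ 0
```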

Keywords: electric drive, induction motor, open-loop vector control, space vector pulse width modulation technique

Procedia PDF Downloads 143
5942 Study on Acoustic Source Detection Performance Improvement of Microphone Array Installed on Drones Using Blind Source Separation

Authors: Youngsun Moon, Yeong-Ju Go, Jong-Soo Choi

Abstract:

Most drones currently tasked with surveillance/reconnaissance missions are equipped only with optical equipment, but a microphone array can also be used to estimate the location of an acoustic source, providing additional information when optical equipment is unavailable. The purpose of this study is to estimate the direction of arrival (DOA) of an acoustic source from the drone, based on time difference of arrival (TDOA) estimation. The problem is that the target acoustic source cannot be measured clearly because of the drone noise. To overcome this problem, the drone noise and the target acoustic source are separated using blind source separation (BSS) based on independent component analysis (ICA). ICA can be performed by assuming that the drone noise and the target acoustic source are independent and that each signal is non-Gaussian. To maximize the non-Gaussianity of each signal, we use negentropy and kurtosis, based on probability theory. As a result, we can improve the TDOA and DOA estimation of the target source in a noisy environment. We simulated the performance of the DOA algorithm with the BSS algorithm applied and demonstrated the simulation through experiments in an anechoic wind tunnel.
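
Two of the building blocks outlined above are sketched separately below on synthetic data: FastICA recovering a tonal target from a mixture with broadband noise, and a TDOA-to-DOA conversion for a two-microphone pair; the signals, sample rate, and microphone spacing are assumptions, not the wind-tunnel measurements.

```python
import numpy as np
from sklearn.decomposition import FastICA

fs, c, d = 16000, 343.0, 0.2                      # sample rate (Hz), speed of sound (m/s), mic spacing (m)
t = np.arange(0, 0.2, 1 / fs)
rng = np.random.default_rng(0)
target = np.sin(2 * np.pi * 800 * t)              # target acoustic source
rotor = rng.standard_normal(t.size)               # stands in for broadband drone self-noise

# (1) blind source separation of two instantaneous mixtures
mixes = np.c_[target + 0.8 * rotor, 0.6 * target + rotor]
recovered = FastICA(n_components=2, random_state=0).fit_transform(mixes)
best = max(abs(np.corrcoef(s, target)[0, 1]) for s in recovered.T)
print("best correlation with target after ICA:", round(best, 3))

# (2) TDOA -> DOA for a cleaned signal arriving 5 samples later at the second microphone
m1, m2 = target, np.roll(target, 5)
lag = np.argmax(np.correlate(m2, m1, "full")) - (m1.size - 1)
print("DOA estimate (deg):", round(np.degrees(np.arcsin(np.clip(lag / fs * c / d, -1, 1))), 1))
```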

Keywords: aeroacoustics, acoustic source detection, time difference of arrival, direction of arrival, blind source separation, independent component analysis, drone

Procedia PDF Downloads 153
5941 Hybrid Hunger Games Search Optimization Based on the Neural Networks Approach Applied to UAVs

Authors: Nadia Samantha Zuñiga-Peña, Norberto Hernández-Romero, Omar Aguilar-Mejia, Salatiel García-Nava

Abstract:

Using unmanned aerial vehicles (UAVs) for load transport has gained significant importance in various sectors due to their ability to improve efficiency, reduce costs, and access hard-to-reach areas. Although UAVs offer numerous advantages for load transport, several complications and challenges must be addressed to exploit their potential fully. The complexity lies in the fact that UAVs are underactuated, non-linear systems with a high degree of coupling between their variables and are subject to forces with uncertainty. One of the biggest challenges is modeling and controlling the system formed by a UAV carrying a load. In order to solve the controller problem, in this work, a hybridization of a neural network and the Hunger Games Search (HGS) metaheuristic algorithm is developed and implemented to find the parameters of the super twisting sliding mode controller for the 8-degree-of-freedom model of a UAV with payload. The optimized controller successfully tracks the UAV along the three-dimensional desired path, demonstrating the effectiveness of the proposed solution. A performance comparison shows the superiority of the neural network HGS (NNHGS) over the plain HGS algorithm, reducing the tracking error by 57.5%.

Keywords: neural networks, hunger games search, super twisting sliding mode controller, UAVs

Procedia PDF Downloads 21
5940 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment

Authors: U. Yerlikaya, R. T. Balkan

Abstract:

In this research, a method is developed to obtain a high-dimensional configuration space for path planning problems. In typical cases, path planning problems are solved directly in the 3-dimensional (D) workspace. However, this is inefficient for robots with various geometrical and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space called the configuration space. The number of dimensions of the configuration space equals the number of degrees of freedom of the system of interest. The method can be applied in two ways. In the first way, the point clouds of all the bodies of the system and their interactions are used. The second way uses the clearance function of simulation software, in which the minimum distances between the surfaces of bodies are measured simultaneously. A double-turret system is considered within the scope of this study, and its 4-D configuration space is obtained in these two ways. As a result, the difference between the two methods is around 1%, depending on the density of the point cloud, and the disparity between the two steadily decreases as the point cloud density increases. At the end of the study, in order to verify the 4-D configuration space obtained, the 4-D path planning problem was realized as 2-D + 2-D and a sample path planning was carried out using the A* algorithm. Then, the accuracy of the configuration space was proved using the obtained paths on the simulation model of the double-turret system.

Keywords: A* algorithm, autonomous turrets, high-dimensional C-space, manifold C-space, point clouds

Procedia PDF Downloads 134
5939 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability

Authors: Chin-Chia Jane

Abstract:

In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the required demand to the destination while the total transmission time remains under the travel time limitation. This work is pioneering: whereas existing literature evaluates travel time reliability via a single optimal path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc has a new travel time weight which takes the value 0. Each intermediate node is replaced by two nodes u and v, and an arc directed from u to v. The newly generated nodes u and v are perfect nodes. The new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network which has 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. To make a comparison, we test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than the complete enumeration method. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
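
The QoS notion defined above can be illustrated numerically as below; for brevity the sketch uses arc rather than node time weights and a naive Monte Carlo estimate over random arc states instead of the paper's decomposition algorithm, and the small network data are invented.

```python
import networkx as nx
import numpy as np

edges = [                     # (u, v, capacity, per-unit travel time, operation probability)
    ("s", "a", 5, 2, 0.95), ("s", "b", 4, 3, 0.90),
    ("a", "t", 4, 2, 0.95), ("b", "t", 5, 2, 0.90), ("a", "b", 2, 1, 0.85)]
demand, time_limit, rng = 6, 30, np.random.default_rng(0)

def qos(n_trials=2000):
    ok = 0
    for _ in range(n_trials):
        g = nx.DiGraph()
        for u, v, cap, w, p in edges:
            if rng.random() < p:                      # arc survives this trial
                g.add_edge(u, v, capacity=cap, weight=w)
        if not (g.has_node("s") and g.has_node("t")):
            continue
        flow = nx.max_flow_min_cost(g, "s", "t")      # send as much as possible at least total time
        sent = sum(flow["s"].values())
        if sent >= demand and nx.cost_of_flow(g, flow) <= time_limit:
            ok += 1
    return ok / n_trials

print("estimated QoS:", qos())
```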

Keywords: quality of service, reliability, transportation network, travel time

Procedia PDF Downloads 216
5938 Melatonin Improved Vase Quality by Delaying Oxidation Reaction and Supplying More Energies in Cut Peony (Paeonia Lactiflora cv. Sarah)

Authors: Tai Chen, Caihuan Tian, Xiuxia Ren, Jingqi Xue, Xiuxin Zhang

Abstract:

The herbaceous peony has become increasingly popular worldwide in recent years, especially as a cut flower with great economic value. However, peony has a very short vase life, usually only 3-5 d, which seriously affects its commodity value. In this study, we used the cut peony (Paeonia lactiflora cv. Sarah) as a material and found that melatonin treatment significantly improved its postharvest performance. In the control group, the vase life was 4.8 d, with petal dropping at the end; melatonin treatment (40 μM) increased this to 6.9 d without petal dropping. Further study showed that melatonin treatment significantly increased the activity of antioxidant enzymes as well as the content of reducing sugars in petals, whereas the starch content in petals decreased. These results indicate that melatonin treatment may delay the oxidation reactions caused by aging while also providing extra energy for maintaining flowering. Through full-length transcriptome sequencing, a total of 2819 differentially expressed genes (DEGs) between the control and melatonin treatment groups were identified. KEGG enrichment analysis showed that these DEGs were mainly involved in three pathways: melatonin synthesis, starch and sucrose conversion, and plant disease resistance. After RT-qPCR verification, we identified three DEGs, named PlBAM3, PlWRKY22 and PlTIP1, which should play major roles in melatonin-improved postharvest performance. One possible explanation is that PlBAM3 causes maltose production (by starch degradation), maintains proline biosynthesis, and thereby alleviates oxidative stress. Another is that both PlBAM3 and PlWRKY22 are key drought resistance regulators able to alleviate osmotic stress and improve water absorption, which may also help to improve the postharvest quality of cut peony. In addition, PlTIP1 is involved in the sugar signaling pathway, indicating that sugar may also act as a signal substance during this process. Our work may provide new ideas for prolonging the vase life of cut peony and ultimately improving its commodity value.

Keywords: cut peony, melatonin, vase life, oxidation reaction, energy supply, differentially expressed genes

Procedia PDF Downloads 40
5937 Supercomputer Simulation of Magnetic Multilayers Films

Authors: Vitalii Yu. Kapitan, Aleksandr V. Perzhu, Konstantin V. Nefedev

Abstract:

The necessity of studying magnetic multilayer structures is explained by the prospects of their practical application as a technological basis for creating new storage media. Magnetic multilayer films have many unique features that contribute to increasing the density of information recording and the speed of storage devices. Multilayer structures are structures of alternating magnetic and nonmagnetic layers. Within the classical Heisenberg model, lattice spin systems with direct short- and long-range exchange interactions were investigated by Monte Carlo methods. The thermodynamic characteristics of multilayer structures, such as the temperature behavior of magnetization, energy, and heat capacity, were investigated, as were the magnetization reversal processes of multilayer structures in external magnetic fields. The developed software is based on Rust, a new, promising experimental programming language developed by Mozilla and positioned as an alternative to C and C++. For the Monte Carlo simulation, the Metropolis algorithm and its parallel implementation using MPI, as well as the Wang-Landau algorithm, were used. We plan to study magnetic multilayer films with asymmetric Dzyaloshinskii–Moriya (DM) interaction, interface effects, and skyrmion textures. This work was supported by the state task of the Ministry of Education and Science of Russia, project no. 3.7383.2017/8.9.

Keywords: The Monte Carlo methods, Heisenberg model, multilayer structures, magnetic skyrmion

Procedia PDF Downloads 163
5936 Numerical Simulation of Two-Dimensional Flow over a Stationary Circular Cylinder Using Feedback Forcing Scheme Based Immersed Boundary Finite Volume Method

Authors: Ranjith Maniyeri, Ahamed C. Saleel

Abstract:

Two-dimensional fluid flow over a stationary circular cylinder is one of the benchmark problems in the field of fluid-structure interaction in computational fluid dynamics (CFD). Motivated by this, in the present work, a two-dimensional computational model is developed using an improved version of the immersed boundary method which combines the feedback forcing scheme of the virtual boundary method with Peskin's regularized delta function approach. Lagrangian coordinates are used to represent the cylinder, and Eulerian coordinates are used to describe the fluid flow. A two-dimensional Dirac delta function is used to transfer quantities between the solid and fluid domains. Further, the continuity and momentum equations governing the fluid flow are solved using a fractional-step-based finite volume method on a staggered Cartesian grid system. The developed code is validated by comparing the values of the drag coefficient obtained for different Reynolds numbers with other researchers' results. Numerical simulations for different Reynolds numbers also show that the flow behavior is well captured. The stability of the improved version of the immersed boundary method is analyzed for different values of the feedback forcing coefficients.
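
Peskin's regularized delta function referred to above can be sketched as follows; the particular 4-point kernel, grid spacing, and marker position are common choices assumed for illustration.

```python
import numpy as np

def phi(r):
    # Peskin's 4-point regularized kernel (one common choice; others exist)
    r = np.abs(r)
    out = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r <= 2.0)
    out[m1] = (3 - 2 * r[m1] + np.sqrt(1 + 4 * r[m1] - 4 * r[m1] ** 2)) / 8
    out[m2] = (5 - 2 * r[m2] - np.sqrt(-7 + 12 * r[m2] - 4 * r[m2] ** 2)) / 8
    return out

def delta2d(x, y, xk, yk, h):
    # tensor-product regularized delta used to spread Lagrangian forcing onto the Eulerian grid
    return phi((x - xk) / h) * phi((y - yk) / h) / h ** 2

h = 0.05
x, y = np.meshgrid(np.arange(0, 1, h), np.arange(0, 1, h))
w = delta2d(x, y, 0.43, 0.57, h)           # weights for one Lagrangian boundary marker
print(round(w.sum() * h * h, 6))            # discrete integral is ~ 1
```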

Keywords: Feedback Forcing Scheme, Finite Volume Method, Immersed Boundary Method, Navier-Stokes Equations

Procedia PDF Downloads 299
5935 Polyethylenimine-Ethoxylated Dual Interfacial Layers for High-Efficient Quantum Dot Light-Emitting Diodes

Authors: Woosuk Lee

Abstract:

We controlled the electron injection rate in inverted quantum dot light-emitting diode (QLED) by inserting PEIE layer between ZnO electron transport layer(ETL) and quantum dots(QDs) layer and successfully demonstrated high efficiency of QLEDs. The inverted QLED has the layer structure of ITO(cathode)/ ZnO NPs/PEIE/QDs/PEIE/P-TPD/MoO3/Al(anode). The PEIE between poly-TPD hole transport layer (HTL) and quantum dot emitting layer protects QD EML during HTL coating process and improves the surface morphology. In addition, the hole injection barrier is reduced by upshifting the valence band maximum (VBM) of QDs. An additional layer of PEIE was introduced between ZnO and QD to balance charge within QD emissive layer in device, which serves as an effective electron blocking layer without changing device operating condition such as turn-on voltage and emissive spectra. As a result, the optimized QLED with 5nm PEIE shows a ~36% improved current efficiency and external quantum efficiency (EQE) compared to the QLED without PEIE.(maximum current efficiency, and EQE are achieved 70cd/A and 17.3%, respectively). In particular, the maximum brightness of the optimized QLED dramatically improved by a factor of 2.3 relative to the QLED without PEIE. The main reasons for these QLED performance improvement are due to the suppressing the leakage current across the device and well confined exciton by inserting PEIE layers.

Keywords: quantum dot light-emitting diodes, interfacial layer, charge-injection balance, suppressing QD charging

Procedia PDF Downloads 175
5934 Risk Assessment of Natural Gas Pipelines in Coal Mined Gobs Based on Bow-Tie Model and Cloud Inference

Authors: Xiaobin Liang, Wei Liang, Laibin Zhang, Xiaoyan Guo

Abstract:

Pipelines inevitably pass through coal mined gobs in mining areas, and the stability of these gobs has a great influence on pipeline safety. After an extensive literature study and field research, it was found that there are few risk assessment methods for coal mined gob pipelines and a lack of data on the gob sites. Therefore, the fuzzy comprehensive evaluation method based on expert opinions is widely used. However, the subjective opinions or lack of experience of individual experts may lead to inaccurate evaluation results, so the accuracy of the results needs to be further improved. This paper presents a comprehensive approach to achieve this purpose by combining the bow-tie model and cloud inference. The specific evaluation process is as follows. First, a bow-tie model composed of a fault tree and an event tree is established to graphically illustrate the probability and consequence indicators of pipeline failure. Second, the indicators are scored in the form of intervals using the interval estimation method to improve the accuracy of the results, and the censored mean algorithm is used to remove the maximum and minimum scores to improve the stability of the results. The golden section method is used to determine the weights of the indicators and reduce the subjectivity of the index weights. Third, the failure probability and failure consequence scores of the pipeline are converted into three numerical features by cloud inference, which better describes the ambiguity and volatility of the results and hence of the risk level. Finally, cloud drop graphs of failure probability and failure consequences can be produced, which intuitively and accurately illustrate the ambiguity and randomness of the results. A case study of a coal mine gob pipeline carrying natural gas was investigated to validate the utility of the proposed method. The evaluation results of this case show that the probability of failure of the pipeline is very low while the consequences of failure are more serious, which is consistent with reality.

Keywords: bow-tie model, natural gas pipeline, coal mine gob, cloud inference

Procedia PDF Downloads 244