Search results for: inverse distance weighted method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20855

19445 A Simple Autonomous Hovering and Operating Control of Multicopter Using Only Web Camera

Authors: Kazuya Sato, Toru Kasahara, Junji Kuroda

Abstract:

In this paper, an autonomous hovering control method for a multicopter using only a Web camera is proposed. Various autonomous flight control methods for multicopters have recently been proposed, but the previously proposed methods often rely on a motion capture system (e.g., OptiTrack) or a laser range finder to measure the position and attitude of the multicopter. To achieve autonomous flight control of a multicopter with simple equipment, we propose an autonomous flight control method using an AR marker and a Web camera. The AR marker provides the position of the multicopter in three-dimensional Cartesian coordinates, and each coordinate is mapped to the aileron, elevator, or accelerator throttle operation. A simple PID control method is applied to each operation, and the controller gains are tuned. Experimental results are given to show the effectiveness of our proposed method. Moreover, another simple operation method for autonomous multicopter flight control is also proposed.
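The per-channel PID loop described above can be sketched as follows; the class, the gains, and the camera loop rate are illustrative assumptions, not the controller or values used in the paper.

```python
class PID:
    """Minimal discrete PID controller, one instance per control
    channel (aileron, elevator, throttle)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# One controller per axis; gains are illustrative, not from the paper.
aileron = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.033)   # ~30 fps camera loop
command = aileron.update(setpoint=0.0, measured=0.12)  # marker offset in m
```

The same class would be instantiated three times, one per mapped operation, with independently tuned gains.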

Keywords: autonomous hovering control, multicopter, Web camera, operation

Procedia PDF Downloads 554
19444 Human Identification Using Local Roughness Patterns in Heartbeat Signal

Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori

Abstract:

Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential for human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, ECG has a real-time vitality characteristic that signifies live signs, ensuring that a legitimate individual is being identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual's heartbeats at different instances of time. These variations may occur due to muscle flexure, changes in mental or emotional state, and changes in sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with normalized cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Binary weights are then multiplied with the pattern to obtain the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of the individual subjects in the database. One advantage of the proposed feature is that, unlike conventional methods, it does not depend on the accuracy of detecting the QRS complex.
Supervised recognition methods are then designed, using minimum-distance-to-mean and Bayesian classifiers, to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the PTB database of the National Metrology Institute of Germany showed that the proposed method is promising compared to a conventional interval- and amplitude-feature-based method.
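The local-roughness feature described above (compare the neighbors with the window center, weight the resulting bits, histogram the codes) can be sketched as follows; the window size and bit ordering are assumptions, since the abstract does not specify them.

```python
import numpy as np

def local_roughness_histogram(signal, half_window=4):
    """1-D local binary patterns: at each sample, compare the
    2*half_window neighbours with the centre value, weight the
    resulting bits by powers of two, and histogram the codes."""
    n_bits = 2 * half_window
    codes = []
    for i in range(half_window, len(signal) - half_window):
        neighbours = np.concatenate(
            (signal[i - half_window:i], signal[i + 1:i + 1 + half_window]))
        bits = (neighbours >= signal[i]).astype(int)
        codes.append(int((bits * (2 ** np.arange(n_bits))).sum()))
    hist, _ = np.histogram(codes, bins=2 ** n_bits, range=(0, 2 ** n_bits))
    return hist / hist.sum()   # normalised descriptor of the heartbeat
```

The resulting normalized histogram would serve as the feature vector fed to the minimum-distance-to-mean or Bayesian classifier.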

Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification

Procedia PDF Downloads 400
19443 Multi-Criteria Assessment of Biogas Feedstock

Authors: Rawan Hakawati, Beatrice Smyth, David Rooney, Geoffrey McCullough

Abstract:

Targets have been set in the EU to increase the share of renewable energy consumption to 20% by 2020, but developments have not occurred evenly across the member states. Northern Ireland is almost 90% dependent on imported fossil fuels. With such high energy dependency, Northern Ireland is particularly susceptible to security-of-supply issues. Linked to fossil fuels are greenhouse gas emissions, and the EU plans to reduce emissions by 20% by 2020. The use of indigenously produced biomass could reduce both greenhouse gas emissions and external energy dependence. With a wide range of both crop and waste feedstock potentially available in Northern Ireland, anaerobic digestion has been put forward as a possible solution for renewable energy production, waste management, and greenhouse gas reduction. Not all feedstock, however, is the same, and an understanding of feedstock suitability is important for both plant operators and policy makers. The aim of this paper is to investigate biomass suitability for anaerobic digestion in Northern Ireland. It is also important that decisions are based on solid scientific evidence. For this reason, the methodology used is multi-criteria decision matrix analysis, which takes multiple criteria into account simultaneously and ranks alternatives accordingly. The model uses the weighted sum method, with weights decided by the entropy method (which measures uncertainty using probability theory). The TOPSIS method is utilized to carry out the mathematical analysis and provide the final scores. Feedstock currently available in Northern Ireland was classified into two categories: wastes (manure, sewage sludge, and food waste) and energy crops, specifically grass silage. To select the most suitable feedstock, methane yield, feedstock availability, feedstock production cost, biogas production, calorific value, produced kilowatt-hours, dry matter content, and carbon-to-nitrogen ratio were assessed.
The highest weight (0.249) corresponded to production cost, reflecting a variation from a £41/tonne gate fee to a £22/tonne cost. Based on the calculated weights, grass silage was found to be the most suitable feedstock. A sensitivity analysis was then conducted to investigate the impact of the weights. This analysis used the Pugh matrix method, which relies upon the Analytic Hierarchy Process and pairwise comparisons to determine a weighting for each criterion. The results showed that the highest weight (0.193) corresponded to biogas production, indicating that grass silage and manure are the most suitable feedstocks. Introducing co-digestion of two or more substrates can boost the biogas yield due to a synergistic effect that favors positive biological interactions among the feedstocks. A further benefit of co-digesting manure is that the anaerobic digestion process also acts as a waste management strategy. From the research, it was concluded that energy from agricultural biomass is highly advantageous in Northern Ireland because it would increase the country's production of renewable energy, manage waste production, and limit the production of greenhouse gases (the current contribution from the agriculture sector is 26%). Decision-making methods based on scientific evidence aid policy makers in weighing multiple criteria in a logical, mathematical manner in order to reach a resolution.
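A minimal sketch of the entropy-weighting and TOPSIS steps described above; the decision matrix, criteria, and values below are toy numbers, not the study's data.

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from the Shannon entropy of the
    normalised decision matrix (rows = feedstocks, cols = criteria)."""
    P = X / X.sum(axis=0)
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))
    d = 1.0 - E                       # degree of diversification
    return d / d.sum()

def topsis(X, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    benefit[j] is True when a larger value of criterion j is better."""
    R = X / np.sqrt((X ** 2).sum(axis=0))       # vector normalisation
    V = R * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)    # higher score = more suitable

# Toy matrix: 3 feedstocks x 3 criteria (methane yield, cost, biogas).
X = np.array([[350., 22., 180.], [200., 41., 120.], [300., 30., 160.]])
benefit = np.array([True, False, True])   # cost is a cost-type criterion
scores = topsis(X, entropy_weights(X), benefit)
```

The alternative with the highest closeness score is ranked most suitable, mirroring the paper's entropy-weighted TOPSIS ranking.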

Keywords: anaerobic digestion, biomass as feedstock, decision matrix, renewable energy

Procedia PDF Downloads 453
19442 Effects of Plasma Technology in Biodegradable Films for Food Packaging

Authors: Viviane P. Romani, Bradley D. Olsen, Vilásia G. Martins

Abstract:

Biodegradable films for food packaging have gained growing attention due to the environmental pollution caused by synthetic films and the interest in better use of natural resources. Important research advances have been made in the development of materials from proteins, polysaccharides, and lipids. However, the commercial use of this new generation of sustainable materials for food packaging is still limited due to their low mechanical and barrier properties, which could compromise food quality and safety. Thus, strategies to improve the performance of these materials have been tested, such as chemical modification, the incorporation of reinforcing structures, and others. Cold plasma is a versatile, fast, and environmentally friendly technology. It consists of a partially ionized gas containing free electrons, ions, radicals, and neutral particles able to react with polymers and initiate different reactions, leading to polymer degradation, functionalization, etching, and/or cross-linking. In the present study, biodegradable films from fish protein prepared by the casting technique were plasma-treated using AC glow discharge equipment. The reactor was first evacuated to ~7 Pa, and the films were exposed to air plasma for 2, 5, and 8 min. The films were evaluated for their mechanical and water vapor permeability (WVP) properties, and changes in the protein structure were observed using Scanning Electron Microscopy (SEM) and X-ray diffraction (XRD). Potential cross-links and the elimination of surface defects by etching might be the reason for the observed increase in tensile strength and decrease in elongation at break. Among the plasma exposure times tested, no differences were observed at the longer exposure times. The X-ray pattern showed a broad peak at 2θ = 19.51°, which corresponds to a distance of 4.6 Å by applying Bragg's law. This distance corresponds to the average backbone distance within the α-helix.
Thus, the changes observed in the films might indicate that the helical configuration of the fish protein was disturbed by the plasma treatment. SEM images showed surface damage in the films after 5 and 8 min of plasma treatment, indicating that 2 min was the most adequate treatment time. It was verified that plasma removes water from the films, since a weight loss of 4.45% was registered for films treated for 2 min. However, after 24 h at 50% relative humidity, the lost water was recovered. WVP increased from 0.53 to 0.65 g.mm/h.m².kPa after plasma treatment for 2 min, which is desirable for some food applications that require water passage through the packaging. In general, the plasma technology affects the properties and structure of fish protein films. Since this technology changes the surface of polymers, these films might be used to develop multilayer materials, as well as to incorporate active substances on the surface to obtain active packaging.
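The conversion from the 2θ = 19.51° peak to the ~4.6 Å spacing via Bragg's law can be reproduced as below; the X-ray wavelength is not stated in the abstract, so Cu Kα is assumed here for illustration.

```python
import math

# Interplanar spacing from Bragg's law, n*lam = 2*d*sin(theta).
lam = 1.5406                        # angstrom, assumed Cu K-alpha source
two_theta = 19.51                   # degrees, from the diffractogram
theta = math.radians(two_theta / 2.0)
d = lam / (2.0 * math.sin(theta))   # ~4.5 angstrom, close to the reported
                                    # 4.6 angstrom alpha-helix backbone spacing
```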

Keywords: fish protein films, food packaging, improvement of properties, plasma treatment

Procedia PDF Downloads 159
19441 Application of Double Side Approach Method on Super Elliptical Winkler Plate

Authors: Hsiang-Wen Tang, Cheng-Ying Lo

Abstract:

In this study, the static behavior of a super elliptical Winkler plate is analyzed by applying the double side approach method. The lack of information about super elliptical Winkler plates is the motivation for this study, and we use the double side approach method to solve the problem because of its superior ability to efficiently treat problems with complex boundary shapes. The double side approach method has the advantages of high accuracy, an easy calculation procedure, and a low computational load. Most importantly, it can give the error bound of the approximate solution. The numerical results not only show that the double side approach method works well on this problem but also provide knowledge of the static behavior of super elliptical Winkler plates in practical use.

Keywords: super elliptical winkler plate, double side approach method, error bound, mechanic

Procedia PDF Downloads 348
19440 Determination of Crustal Structure and Moho Depth within the Jammu and Kashmir Region, Northwest Himalaya through Receiver Function

Authors: Shiv Jyoti Pandey, Shveta Puri, G. M. Bhat, Neha Raina

Abstract:

The Jammu and Kashmir (J&K) region of Northwest Himalaya has a long history of earthquake activity and falls within Seismic Zones IV and V. To determine the crustal structure beneath this region, we utilized the teleseismic receiver function method. This paper presents the results of analyses of teleseismic earthquake waves recorded by 10 seismic observatories installed in the vicinity of major thrusts and faults. Teleseismic waves at epicentral distances between 30° and 90°, with moment magnitudes greater than or equal to 5.5, which contain a large amount of information about the crust and upper-mantle structure directly beneath a receiver, were used. The receiver function (RF) technique has been widely applied to investigate crustal structures using P-to-S converted (Ps) phases from velocity discontinuities. The arrival times of the Ps converted phase and the PpPs and PpSs + PsPs reverberated phases from the Moho can be combined to constrain the mean crustal thickness and Vp/Vs ratio. Over 500 receiver functions from 10 broadband stations located in the Jammu & Kashmir region of Northwest Himalaya were analyzed. With the help of the H-K stacking method, we determined the crustal thickness (H) and average crustal Vp/Vs ratio (K) in this region. We also used the Neighbourhood algorithm technique to verify our results. The receiver function results for these stations show that the crustal thickness under Jammu & Kashmir ranges from 45.0 to 53.6 km, with an average value of 50.01 km. The Vp/Vs ratio varies from 1.63 to 1.99, with an average value of 1.784, which corresponds to an average Poisson's ratio of 0.266, with a range from 0.198 to 0.331. High Poisson's ratios under some stations may be related to partial melting in the crust near the uppermost mantle. The crustal structure model developed from this study can be used to refine the velocity model used for precise epicenter location in the region, thereby improving understanding of the current seismicity in the region.
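H-K stacking grid-searches crustal thickness and Vp/Vs by stacking receiver-function amplitudes at the predicted arrival times of the Ps, PpPs, and PpSs + PsPs phases. The core ingredient, the predicted Moho Ps delay, follows the standard moveout formula sketched below; the Vp value and ray parameter are illustrative assumptions, not the paper's velocity model.

```python
import math

def ps_delay(H, vp, vs, p):
    """Predicted Moho Ps-P differential time (s) for crustal thickness
    H (km), velocities vp, vs (km/s), and ray parameter p (s/km)."""
    return H * (math.sqrt(vs ** -2 - p ** 2) - math.sqrt(vp ** -2 - p ** 2))

# Illustrative values only; the Vp/Vs ratio is the average reported above.
vp = 6.3
vs = vp / 1.784
t_ps = ps_delay(H=50.0, vp=vp, vs=vs, p=0.06)   # roughly 6-7 s
```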

Keywords: H-K stacking, Poisson’s ratios, receiver function, teleseismic

Procedia PDF Downloads 244
19439 Efficacy of Celecoxib Adjunct Treatment on Bipolar Disorder: Systematic Review and Meta-Analysis

Authors: Daniela V. Bavaresco, Tamy Colonetti, Antonio Jose Grande, Francesc Colom, Joao Quevedo, Samira S. Valvassori, Maria Ines da Rosa

Abstract:

Objective: We performed a systematic review and meta-analysis to evaluate the potential effect of adjunct treatment with the cyclo-oxygenase (Cox)-2 inhibitor celecoxib in bipolar disorder (BD), based on randomized controlled trials. Method: The electronic databases MEDLINE, EMBASE, Scopus, Cochrane Central Register of Controlled Trials (CENTRAL), BioMed Central, Web of Science, IBECS, LILACS, PsycINFO (American Psychological Association), congress abstracts, and grey literature (Google Scholar and the British Library) were searched for studies published from January 1990 to February 2018. A search strategy was developed using the terms 'Bipolar disorder' or 'Bipolar mania' or 'Bipolar depression' or 'Bipolar mixed' or 'Bipolar euthymic' and 'Celecoxib' or 'Cyclooxygenase-2 inhibitors' or 'Cox-2 inhibitors' as text words and Medical Subject Headings (i.e., MeSH and EMTREE). The therapeutic effects of adjunctive treatment with celecoxib were analyzed, and it was possible to carry out a meta-analysis of three studies included in the systematic review. The meta-analysis was performed on the final scores of the Young Mania Rating Scale (YMRS) at the end of the randomized controlled trials (RCTs). Results: Three primary studies were included in the systematic review, with a total of 121 patients. The meta-analysis showed a significant effect on YMRS scores in patients with BD who received celecoxib adjuvant treatment in comparison to placebo. The weighted mean difference was 5.54 (95% CI = 3.26-7.82; p < 0.001; I² = 0%). Conclusion: The systematic review suggests that adjuvant treatment with celecoxib improves the response to the main treatments in patients with BD when compared with adjuvant placebo treatment.
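A weighted mean difference of this kind is typically obtained by inverse-variance pooling; a minimal fixed-effect sketch follows, with hypothetical per-study values rather than the three trials' actual data.

```python
import math

def fixed_effect_wmd(effects, ses):
    """Inverse-variance pooled mean difference with a 95% CI."""
    w = [1.0 / se ** 2 for se in ses]               # inverse-variance weights
    pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Hypothetical per-study YMRS mean differences and standard errors.
wmd, lo, hi = fixed_effect_wmd([6.0, 5.2, 5.4], [2.0, 1.9, 2.4])
```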

Keywords: bipolar disorder, Cox-2 inhibitors, Celecoxib, systematic review, meta-analysis

Procedia PDF Downloads 484
19438 A Simulation Model to Analyze the Impact of Virtual Responsiveness in an E-Commerce Supply Chain

Authors: T. Godwin

Abstract:

The design of a supply chain always entails a trade-off between responsiveness and efficiency. The launch of e-commerce has not only changed the way of shopping but also altered supply chain design while trading off efficiency against responsiveness. A concept called 'virtual responsiveness' is introduced in the context of the e-commerce supply chain. A simulation model is developed to compare actual responsiveness and virtual responsiveness to the customer in an e-commerce supply chain. The simulation is restricted to the movement of goods from the e-tailer to the customer. Customer demand follows a statistical distribution and is generated using the inverse transformation technique. The two responsiveness schemes of the supply chain are compared in terms of the minimum inventory required at the e-tailer to fulfill the orders. Computational results show the savings achieved through virtual responsiveness. The insights gained from this study could be used to redesign e-commerce supply chains by incorporating virtual responsiveness. A part of the achieved cost savings could be passed back to the customer, thereby making the supply chain both effective and competitive.
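The inverse transformation technique mentioned above generates demand by passing a uniform draw through the inverse CDF of the chosen distribution; an exponential distribution is assumed here purely for illustration, since the abstract does not name the distribution.

```python
import math
import random

def exponential_demand(rate, rng=random):
    """Inverse transform sampling: pass a U(0,1) draw through the
    inverse CDF of the target distribution (exponential here)."""
    u = rng.random()
    return -math.log(1.0 - u) / rate    # F^-1(u) for Exp(rate)

random.seed(42)
samples = [exponential_demand(rate=0.5) for _ in range(10000)]
mean_demand = sum(samples) / len(samples)   # close to 1/rate = 2
```

The same pattern works for any distribution with a closed-form or numerically invertible CDF, which is why it is a staple of discrete-event supply chain simulation.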

Keywords: e-commerce, simulation modeling, supply chain, virtual responsiveness

Procedia PDF Downloads 341
19437 Correlation to Predict the Effect of Particle Type on Axial Voidage Profile in Circulating Fluidized Beds

Authors: M. S. Khurram, S. A. Memon, S. Khan

Abstract:

Bed voidage behavior across different flow regimes for Geldart A, B, and D particles (fluid catalytic cracking (FCC) catalyst, particle A, and glass beads) with diameters of 57-872 μm, apparent densities of 1470-3092 kg/m³, and bulk densities of 890-1773 kg/m³ was investigated in a gas-solid circulating fluidized bed of 0.1 m i.d. and 2.56 m height made of Plexiglas. The effects of the operating variables (gas velocity, particle properties, and static bed height) on bed voidage were analyzed. The axial voidage profile showed a typical trend along the riser: a dense bed at the lower part, followed by a transition in the splash zone and a lean phase in the freeboard. Bed expansion and dense bed voidage increased with gas velocity, as expected. From the experimental results, a generalized correlation based on the inverse fluidization number for dense bed voidage, covering the bubbling to fast fluidization regimes, was presented.

Keywords: axial voidage, circulating fluidized bed, splash zone, static bed

Procedia PDF Downloads 281
19436 Study the Dynamic Behavior of Irregular Buildings by the Analysis Method Accelerogram

Authors: Beciri Mohamed Walid

Abstract:

Architectural requirements often impose shapes that lead to an irregular distribution of masses, stiffnesses, and resistances. The main objective of the present study is to estimate the influence of irregularity, both in plan and in elevation, on the dynamic characteristics and behavior of such structures. To do this, the two dynamic methods proposed by RPA99 (the modal spectral method and the accelerogram-based analysis method) are applied to several similar prototypes, the parameters measuring the response of these structures are analyzed, and the results are compared.

Keywords: structure, irregular, code, seismic, method, force, period

Procedia PDF Downloads 304
19435 Solvent Extraction, Spectrophotometric Determination of Antimony(III) from Real Samples and Synthetic Mixtures Using O-Methylphenyl Thiourea as a Sensitive Reagent

Authors: Shashikant R. Kuchekar, Shivaji D. Pulate, Vishwas B. Gaikwad

Abstract:

A simple and selective method was developed for the solvent extraction spectrophotometric determination of antimony(III) using O-methylphenyl thiourea (OMPT) as a sensitive chromogenic chelating agent. The proposed method is based on the formation of an antimony(III)-OMPT complex, which was extracted with 0.0025 M OMPT in chloroform from an aqueous solution of antimony(III) in 1.0 M perchloric acid. The absorbance of this complex was measured at 297 nm against a reagent blank. Beer's law was obeyed up to 15 µg mL⁻¹ of antimony(III). The molar absorptivity and Sandell's sensitivity of the antimony(III)-OMPT complex in chloroform are 16.6730 × 10³ L mol⁻¹ cm⁻¹ and 0.00730282 µg cm⁻², respectively. The stoichiometry of the antimony(III)-OMPT complex, established from the slope ratio method, the mole ratio method, and Job's continuous variation method, was 1:2. The complex was stable for more than 48 h. The interfering effect of various foreign ions was studied, and suitable masking agents were used wherever necessary to enhance the selectivity of the method. The proposed method was successfully applied to the determination of antimony(III) in real alloy samples and synthetic mixtures. The repeatability of the method was checked via the relative standard deviation (RSD) of 10 determinations, which was 0.42%.
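The reported Sandell's sensitivity can be cross-checked from the reported molar absorptivity via the standard definition S = M/ε (the mass per cm² giving A = 0.001 in a 1 cm cell), assuming the standard atomic weight of antimony:

```python
# Sandell's sensitivity (ug cm^-2) from molar absorptivity:
# S = M / epsilon for A = 0.001 and a 1 cm path length.
M_sb = 121.76               # g/mol, standard atomic weight of antimony (assumed)
epsilon = 16.6730e3         # L mol^-1 cm^-1, reported molar absorptivity
sandell = M_sb / epsilon    # ~0.00730 ug cm^-2, matching the reported value
```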

Keywords: solvent extraction, antimony, spectrophotometry, real sample analysis

Procedia PDF Downloads 329
19434 Software-Defined Architecture and Front-End Optimization for DO-178B Compliant Distance Measuring Equipment

Authors: Farzan Farhangian, Behnam Shakibafar, Bobda Cedric, Rene Jr. Landry

Abstract:

Many air navigation technologies are capable of increasing aviation sustainability as well as improving accuracy in Alternative Positioning, Navigation, and Timing (APNT), especially avionics Distance Measuring Equipment (DME), Very high-frequency Omni-directional Range (VOR), etc. The integration of these air navigation solutions could provide robust and efficient accuracy in air mobility, air traffic management, and autonomous operations. Designing a proper RF front-end, power amplifier, and software-defined transponder could pave the way toward an optimized avionics navigation solution. In this article, the possibility of reaching an optimum front-end to be used with a single low-cost Software-Defined Radio (SDR) has been investigated in order to reach a software-defined DME architecture. Our software-defined approach uses the firmware possibilities to design a real-time software architecture compatible with a Multiple Input Multiple Output (MIMO) BladeRF to estimate an accurate time delay between the transmission (Tx) and reception (Rx) channels using synchronous scheduled communication. We designed a novel power amplifier for the transmission channel of the DME to meet the minimum transmission power. This article also investigates the design of proper pulse pairs based on the DO-178B avionics standard. Various guidelines have been tested, and the possibility of passing the certification process for each standard term has been analyzed. Finally, the performance of the DME was tested in the laboratory environment using an IFR6000, which showed that the proposed architecture reached an accuracy of less than 0.23 nautical miles (Nmi) with 98% probability.
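Tx-to-Rx delay estimation of this kind is commonly done by cross-correlating the two sampled channels; the sketch below is a generic correlation-based approach on a synthetic pulse, not the authors' synchronous-scheduling firmware.

```python
import numpy as np

def estimate_delay(tx, rx, fs):
    """Estimate the Tx-to-Rx time delay (s) by locating the peak of the
    full cross-correlation between the two sampled channels."""
    corr = np.correlate(rx, tx, mode="full")
    lag = int(corr.argmax()) - (len(tx) - 1)   # samples of delay
    return lag / fs

# Synthetic Gaussian pulse delayed by 50 samples at fs = 1 MHz.
fs = 1.0e6
t = np.arange(256) / fs
tx = np.exp(-((t - 30e-6) ** 2) / (2 * (3e-6) ** 2))
rx = np.roll(tx, 50)                  # simulated reception 50 samples later
delay = estimate_delay(tx, rx, fs)    # 5e-05 s
```

Sub-sample accuracy, as needed for DME-grade ranging, would require interpolating around the correlation peak; this sketch resolves only whole samples.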

Keywords: avionics, DME, software defined radio, navigation

Procedia PDF Downloads 75
19433 A Multilevel Analysis of Predictors of Early Antenatal Care Visits among Women of Reproductive Age in Benin: 2017/2018 Benin Demographic and Health Survey

Authors: Ebenezer Kwesi Armah-Ansah, Kenneth Fosu Oteng, Esther Selasi Avinu, Eugene Budu, Edward Kwabena Ameyaw

Abstract:

Background: Maternal mortality, particularly in Benin, is a major public health concern in Sub-Saharan Africa. To provide a positive pregnancy experience and reduce maternal morbidities, all pregnant women must receive appropriate and timely prenatal support. However, many pregnant women in developing countries, including Benin, begin antenatal care late. There is a paucity of empirical literature on the prevalence and predictors of early antenatal care visits in Benin. As a result, the purpose of this study is to investigate the prevalence and predictors of early antenatal care visits among women of reproductive age in Benin. Methods: This is a secondary analysis of the 2017/2018 Benin Demographic and Health Survey (BDHS) data. The study involved 6,919 eligible women. Data analysis was conducted using Stata version 14.2 for Mac OS. We adopted a multilevel logistic regression to examine the predictors of early ANC visits in Benin. The results were presented as odds ratios (ORs) with 95% confidence intervals (CIs), and p < 0.05 was used to determine significant associations. Results: The prevalence of early ANC visits among pregnant women in Benin was 57.03% [95% CI: 55.41-58.64].
In the final multilevel logistic regression, the odds of an early ANC visit were higher among women aged 30-34 [aOR=1.60, 95% CI=1.17-2.18] compared with those aged 15-19; among women with primary education [aOR=1.22, 95% CI=1.06-1.42] compared with non-educated women; among women covered by health insurance [aOR=3.03, 95% CI=1.35-6.76] compared with those not covered; among women for whom getting the money needed for treatment was not a big problem [aOR=1.31, 95% CI=1.16-1.49] compared with those for whom it was; among women for whom distance to the health facility was not a big problem [aOR=1.23, 95% CI=1.08-1.41] compared with those for whom it was; and among women whose partners had secondary or higher education [aOR=1.35, 95% CI=1.15-1.57] compared with those whose partners had no education. However, women who had four or more births [aOR=0.60, 95% CI=0.48-0.74] and those in the Atacora Region [aOR=0.50, 95% CI=0.37-0.68] had lower odds of an early ANC visit. Conclusion: This study revealed a relatively high prevalence of early ANC visits among women of reproductive age in Benin. Women's age, the educational status of women and their partners, parity, health insurance coverage, distance to health facilities, and region were all associated with early ANC visits among women of reproductive age in Benin. These factors ought to be taken into account when developing ANC policies and strategies in order to boost early ANC visits among women in Benin. This would significantly reduce maternal and newborn mortality and help achieve the World Health Organization's recommendation that all pregnant women initiate ANC visits within the first three months of pregnancy.
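Each adjusted odds ratio above is exp(β) for the corresponding coefficient in the multilevel logistic model, with the confidence interval from β ± 1.96·SE. In the sketch below, β and SE are back-calculated from the reported health insurance estimate purely for illustration; they are not model outputs.

```python
import math

# beta and SE reconstructed from the reported aOR = 3.03 (95% CI 1.35-6.76)
# for health insurance coverage: beta = ln(3.03), SE = (ln(6.76)-ln(1.35))/3.92.
beta, se = 1.108, 0.411
aor = math.exp(beta)                              # adjusted odds ratio
ci = (math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))
```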

Keywords: antenatal care, Benin, maternal health, pregnancy, DHS, public health

Procedia PDF Downloads 57
19432 Mapping the Suitable Sites for Food Grain Crops Using Geographical Information System (GIS) and Analytical Hierarchy Process (AHP)

Authors: Md. Monjurul Islam, Tofael Ahamed, Ryozo Noguchi

Abstract:

Progress continues in the fight against hunger, yet an unacceptably large number of people still lack the food they need for an active and healthy life. Bangladesh is one of the rising countries in South Asia, but many of its people remain food insecure. In the last few years, Bangladesh has made significant achievements in food grain production, but food security at the national to individual levels remains a matter of major concern. Ensuring food security for all is one of the major challenges that Bangladesh faces today, especially for the production of rice in flood- and poverty-prone areas. The northern part of the country is more vulnerable than any other part of Bangladesh. One of the best ways to ensure food security is to increase domestic production. To increase production, it is necessary to secure lands that achieve optimum utilization of resources. One measure is to identify the vulnerable and potential areas using Land Suitability Assessment (LSA) to increase rice production in the poverty-prone areas. Therefore, the aim of the study was to identify suitable sites for production of the food grain crop rice in the poverty-prone areas located in the northern part of Bangladesh. Lack of knowledge of the best combination of factors that suit rice production has contributed to the low production. To fulfill the research objective, a multi-criteria analysis was done and a suitability map for crop production was produced with the help of a Geographical Information System (GIS) and the Analytical Hierarchy Process (AHP). Primary and secondary data were collected from ground truth information and relevant offices. The suitability levels for each factor were ranked based on the structure of the FAO land suitability classification as: Permanently Not Suitable (N2), Currently Not Suitable (N1), Marginally Suitable (S3), Moderately Suitable (S2), and Highly Suitable (S1).
The suitable sites were identified using spatial analysis and compared with a recent raster image from Google Earth Pro® to validate the reliability of the suitability analysis. To produce the suitability map for rice farming using GIS and the multi-criteria analysis tool, AHP was used to rank the relevant factors, and the resultant weights were used to create the suitability map using the weighted sum overlay tool in ArcGIS 10.3®. The suitability map for rice production in the study area was thus formed. The weighted overlay found that 22.74% (1337.02 km²) of the study area was highly suitable, while 28.54% (1678.04 km²) was moderately suitable, 14.86% (873.71 km²) was marginally suitable, and 1.19% (69.97 km²) was currently not suitable for rice farming. The remaining 32.67% (1920.87 km²) was permanently not suitable, being occupied by settlements, rivers, water bodies, and forests. This research provides information at the local level that could be used by farmers to select suitable fields for rice production, and the approach can then be applied to other crops. It will also be helpful for field workers and policy planners who serve in the agricultural sector.
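The AHP weighting step described above can be sketched as a principal-eigenvector computation with a consistency check; the pairwise comparison matrix below is hypothetical, since the study's actual matrix is not given in the abstract.

```python
import numpy as np

def ahp_weights(A):
    """Priority weights from a pairwise comparison matrix via the
    principal eigenvector, plus Saaty's consistency ratio (CR)."""
    vals, vecs = np.linalg.eig(A)
    k = vals.real.argmax()                     # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals.real.max() - n) / (n - 1)       # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]   # random index
    cr = ci / ri if ri else 0.0
    return w, cr

# Hypothetical 3-criterion comparison (e.g. soil, water, slope).
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3., 1.0, 2.0],
              [1 / 5., 1 / 2., 1.0]])
w, cr = ahp_weights(A)     # weights sum to 1; CR < 0.1 => acceptable
```

The resulting weights would feed the weighted sum overlay, as in the workflow above.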

Keywords: AHP, GIS, spatial analysis, land suitability

Procedia PDF Downloads 230
19431 A Dynamical Study of Fractional Order Obesity Model by a Combined Legendre Wavelet Method

Authors: Hakiki Kheira, Belhamiti Omar

Abstract:

In this paper, we propose a new compartmental fractional-order model for the simulation of epidemic obesity dynamics. Using the Legendre wavelet method combined with the decoupling and quasi-linearization technique, we demonstrate the validity and applicability of our model. We also present some illustrative fractional differential examples to demonstrate the applicability and efficiency of the method. The fractional derivative is described in the Caputo sense.
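The Caputo derivative used above can be approximated numerically with the standard L1 finite-difference scheme; the sketch below verifies it against the known Caputo derivative of t² and is a generic check, not the paper's Legendre wavelet method.

```python
import math
import numpy as np

def caputo_l1(f_vals, t, alpha):
    """L1 finite-difference approximation of the Caputo derivative of
    order 0 < alpha < 1, evaluated at the final grid point."""
    h = t[1] - t[0]
    n = len(t) - 1
    c = h ** (-alpha) / math.gamma(2 - alpha)
    total = 0.0
    for k in range(n):
        b = (n - k) ** (1 - alpha) - (n - k - 1) ** (1 - alpha)
        total += b * (f_vals[k + 1] - f_vals[k])
    return c * total

# Exact Caputo derivative of f(t) = t^2 is 2 t^(2-alpha) / Gamma(3-alpha).
alpha = 0.5
t = np.linspace(0.0, 1.0, 2001)
approx = caputo_l1(t ** 2, t, alpha)
exact = 2.0 / math.gamma(3 - alpha)     # value at t = 1
```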

Keywords: Caputo derivative, epidemiology, Legendre wavelet method, obesity

Procedia PDF Downloads 415
19430 Landslide and Liquefaction Vulnerability Analysis Using Risk Assessment Analysis and Analytic Hierarchy Process Implication: Suitability of the New Capital of the Republic of Indonesia on Borneo Island

Authors: Rifaldy, Misbahudin, Khalid Rizky, Ricky Aryanto, M. Alfiyan Bagus, Fahri Septianto, Firman Najib Wibisana, Excobar Arman

Abstract:

Indonesia has a high level of disaster risk because it lies on the Ring of Fire, in a part of the world where three major tectonic plates meet. Disaster analysis must therefore be performed continually to assess the potential disasters that might occur, in this research specifically landslides and liquefaction. This research was conducted to analyze areas that are vulnerable to landslide and liquefaction hazards and their relevance to assessing the proposal to move the new capital of the Republic of Indonesia to the island of Kalimantan, with a total area of 612,267.22 km². The analysis uses the Analytic Hierarchy Process, with consistency ratio testing, to decompose a complex and unstructured problem into several parameters by assigning values to them. The parameters used in this analysis are slope, land cover, lithology distribution, wetness index, earthquake data, and peak ground acceleration. A weighted overlay of all these parameters was carried out using the percentage values obtained from the Analytic Hierarchy Process, with accuracy confirmed by a consistency ratio, so that the percentage of the area in each vulnerability classification was obtained. The analysis yielded vulnerability classifications from very high to very low: highly vulnerable, 0.15% (918.40 km²); medium, 20.75% (127,045.45 km²); low, 56.54% (346,175.89 km²); and very low, 22.56% (138,127.48 km²). This research is expected to map landslide and liquefaction hazards on the island of Kalimantan and to provide considerations on the suitability of regional development for the new capital of the Republic of Indonesia. It is also expected to provide input that can be applied in any region analyzing landslide and liquefaction vulnerability or the suitability of development of certain regions.
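The weighted-overlay and class-area computation described above can be sketched with synthetic rasters; the layers, weights, class breaks, and cell size below are assumptions, not the study's data.

```python
import numpy as np

# Weighted overlay of normalised criterion rasters (values 0-1) with
# AHP-style weights, then vulnerability class areas.
rng = np.random.default_rng(0)
slope, landcover, pga = (rng.random((100, 100)) for _ in range(3))
weights = [0.5, 0.3, 0.2]                        # from AHP; must sum to 1

vulnerability = weights[0] * slope + weights[1] * landcover + weights[2] * pga
classes = np.digitize(vulnerability, bins=[0.25, 0.5, 0.75])   # 4 classes

cell_area_km2 = 1.0                              # assumed raster cell area
class_pct = [100.0 * (classes == c).mean() for c in range(4)]
class_area = [(classes == c).sum() * cell_area_km2 for c in range(4)]
```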

Keywords: analytic hierarchy process, Borneo Island, landslide and liquefaction, vulnerability analysis

Procedia PDF Downloads 163
19429 Singular Perturbed Vector Field Method Applied to the Problem of Thermal Explosion of Polydisperse Fuel Spray

Authors: Ophir Nave

Abstract:

In our research, we present the concept of the singularly perturbed vector field (SPVF) method and its application to the thermal explosion of diesel spray combustion. Given a system of governing equations containing hidden multi-scale variables, the SPVF method transforms and decomposes the system into fast and slow singularly perturbed subsystems (SPS). This decomposition makes the complex system easier to understand and simplifies the calculations. Powerful analytical, numerical, and asymptotic methods (e.g., the method of integral (invariant) manifolds (MIM), the homotopy analysis method (HAM), etc.) can then be applied to each subsystem. We compare the results obtained by the method of integral invariant manifolds with those of SPVF applied to a spray droplet combustion model. The research concerns the development of an innovative method, singularly perturbed vector fields, for extracting fast and slow variables in physical-mathematical models. The method is based on a numerical algorithm that applies global quasi-linearization to the given physical model, and it has been applied successfully to combustion processes. Our results were compared with experimental data. SPVF is a general numerical and asymptotic method that reveals the hierarchy (multi-scale structure) of a given system.
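In the spirit of the fast/slow separation described above (though not the SPVF algorithm itself), a toy two-scale linear system can be split by the magnitude of the Jacobian eigenvalues; the system and the value of eps are illustrative assumptions.

```python
import numpy as np

# Toy two-scale linear system x' = -x/eps + y, y' = -y with eps << 1;
# both the system and eps are illustrative only.
eps = 1e-3
J = np.array([[-1.0 / eps, 1.0],
              [0.0, -1.0]])

# Split the modes by timescale: large |lambda| marks a fast direction,
# small |lambda| a slow one -- the separation that SPVF automates for
# nonlinear systems with hidden multi-scale variables.
eigvals, eigvecs = np.linalg.eig(J)
order = np.argsort(-np.abs(eigvals))
fast, slow = eigvals[order]
print(fast, slow)
```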

Keywords: polydisperse spray, model reduction, asymptotic analysis, multi-scale systems

Procedia PDF Downloads 214
19428 Optimization of Vertical Axis Wind Turbine Based on Artificial Neural Network

Authors: Mohammed Affanuddin H. Siddique, Jayesh S. Shukla, Chetan B. Meshram

Abstract:

Neural networks are among the most powerful tools of machine learning. Since the invention of the perceptron in the late 1950s, neural networks and their applications have grown rapidly. Neural networks are a technique originally developed for pattern recognition. The structure of a neural network consists of neurons connected through synapses. Here, we have investigated different algorithms and cost-function reduction techniques for the optimization of vertical axis wind turbine (VAWT) rotor blades. The aerodynamic force coefficients corresponding to the airfoils are stored in a database along with the airfoil coordinates. A forward-propagation neural network is created with the aerodynamic coefficients as input and the airfoil coordinates as output. In the proposed algorithm, the hidden layer is incorporated into a cost function having linear and non-linear error terms. In this article, it is observed that artificial neural networks (ANNs) can be used for VAWT optimization.
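A minimal sketch of the forward-propagation mapping from aerodynamic coefficients to airfoil coordinates might look as follows; the layer sizes, untrained random weights, and the sample (Cl, Cd) pair are placeholder assumptions, not the network from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not the paper's network): 2 aerodynamic
# coefficients in (e.g. Cl, Cd), 10 hidden neurons, 8 output values
# standing in for airfoil (x, y) coordinates. Weights are untrained.
W1 = rng.normal(scale=0.1, size=(10, 2))
b1 = np.zeros(10)
W2 = rng.normal(scale=0.1, size=(8, 10))
b2 = np.zeros(8)

def forward(coeffs):
    """One forward pass: coefficients -> tanh hidden layer -> coordinates."""
    h = np.tanh(W1 @ coeffs + b1)
    return W2 @ h + b2

coords = forward(np.array([1.2, 0.015]))  # hypothetical (Cl, Cd) pair
print(coords.shape)
```

Training would adjust W1, b1, W2, b2 against the database of known airfoils by minimizing the cost function described above.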

Keywords: VAWT, ANN, optimization, inverse design

Procedia PDF Downloads 317
19427 A Periodogram-Based Spectral Method Approach: The Relationship between Tourism and Economic Growth in Turkey

Authors: Mesut Balıbey, Serpil Türkyılmaz

Abstract:

A popular topic in econometrics and time series analysis is the cointegrating relationships among the components of a nonstationary time series. Engle and Granger's least squares method and Johansen's conditional maximum likelihood method are the most widely used methods for determining the relationships among variables. Furthermore, a unit-root test based on the periodogram ordinates has certain advantages over conventional tests: periodograms can be calculated without any model specification, and the exact distribution under the assumption of a unit root is obtained. For higher-order processes, the distribution remains the same asymptotically. In this study, in order to illustrate these advantages over conventional tests, we examine a possible relationship between tourism and economic growth in Turkey over the period 1999:01-2010:12 using the periodogram method, Johansen's conditional maximum likelihood method, and Engle and Granger's ordinary least squares method.
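The periodogram ordinates at the Fourier frequencies, on which such a unit-root test is built, can be computed directly from the discrete Fourier transform; the random-walk series below is a synthetic stand-in for the tourism and growth data.

```python
import numpy as np

def periodogram(x):
    """Periodogram ordinates I(w_j) at Fourier frequencies w_j = 2*pi*j/n."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    dft = np.fft.fft(x - x.mean())
    j = np.arange(1, n // 2 + 1)
    return 2 * np.pi * j / n, np.abs(dft[j]) ** 2 / (2 * np.pi * n)

# A random walk (a unit-root series) concentrates its spectral mass at
# the lowest frequencies; 144 points mimics 12 years of monthly data.
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=144))
freqs, ordinates = periodogram(y)
print(ordinates[:3], ordinates[-3:])
```

A periodogram-based unit-root test exploits exactly this concentration of the low-frequency ordinates under the null of a unit root.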

Keywords: cointegration, economic growth, periodogram ordinate, tourism

Procedia PDF Downloads 264
19426 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU

Authors: Ali Abdul Kadhim, Fue Lien

Abstract:

Solid particle distribution on an impingement surface has been simulated utilizing a graphical processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) model. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, taking all external forces into account. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re = 200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature.
The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re = 10,000. The simulations were conducted for L/D = 2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling the LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 over the serial code running on a single CPU.
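A one-dimensional toy version of the probabilistic node-to-node particle transport idea (greatly simplified from the D3Q27 model above) might look like this; the lattice size, hop probability, and initial loading are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1D toy (the paper's model is 3D on a D3Q27 lattice): n[i] particles at
# node i, local hop probability u[i] in [0, 1]. Each particle hops one
# node downstream with probability u[i], else it stays put.
def ca_transport_step(n, u, rng):
    moved = rng.binomial(n, u)  # particles leaving each node this step
    new = n - moved
    new[1:] += moved[:-1]       # arrivals from the left neighbour
    return new                  # moved[-1] would exit at the outlet

n = np.zeros(20, dtype=int)
n[0] = 1000                     # illustrative initial loading at the inlet
u = np.full(20, 0.3)            # illustrative, uniform hop probability
for _ in range(10):
    n = ca_transport_step(n, u, rng)
print(n)
```

In the full model, the hop probabilities depend on the local LBM velocity field and external forces rather than a fixed constant.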

Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model

Procedia PDF Downloads 203
19425 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame

Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin

Abstract:

The operation of the power grid is becoming more complex and difficult due to its rapid development towards high voltage, long distance, and large capacity. For instance, many large-scale wind farms have been connected to the grid, and their fluctuation and randomness are very likely to affect its stability and safety. Fortunately, many new types of equipment based on power electronics have been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), and STATCOM (Static Synchronous Compensator), which can help to deal with this problem. Compared with traditional equipment such as generators, the new controllable devices, represented by the FACTS (Flexible AC Transmission System) family, have more accurate control capability and respond faster, but they are too expensive to use widely. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional equipment and new controllable equipment on both time and space scales, this paper proposes a coordinated optimal control method within a multi-space-time frame that brings both kinds of advantages into play, improving both control capability and economic efficiency. Firstly, the coordination of different spatial scales of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the grid. With generators, FSC (Fixed Series Compensation), and TCSC, the coordination between a two-layer regional power grid and its sub-grid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusion is verified by simulation. The analysis shows that the interface power flow can be controlled by the generators, while the power flow on specific lines between the two-layer regions can be adjusted by FSC and TCSC.
The smaller the interface power flow adjustment assigned to the generators, the larger the control margin of the TCSC; conversely, the total generator consumption is much higher. Secondly, the coordination of different time scales is studied to trade off the total generator consumption against the control margin of the TCSC so that the minimum control cost can be achieved. The coordination between two-layer ultra-short-term correction and AGC (Automatic Generation Control) is studied with generators, FSC, and TCSC. The optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusion is verified by simulation. Finally, the aforementioned multi-time-space method is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. Its correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimal control method can reduce the control cost and will provide a reference for subsequent studies in this field.
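The genetic-algorithm step of the second stage can be sketched on a toy cost that trades a generator adjustment against a TCSC setting under a power-flow constraint; the cost weights and GA settings below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy cost (illustrative weights, not the paper's model): generator
# adjustment g and TCSC setting t must jointly meet a unit power-flow
# target (heavy quadratic penalty) at minimum generation cost.
def cost(pop):
    g, t = pop[:, 0], pop[:, 1]
    return 5.0 * g**2 + 1.0 * t**2 + 100.0 * (g + t - 1.0) ** 2

pop = rng.uniform(-2.0, 2.0, size=(40, 2))
for _ in range(200):
    f = cost(pop)
    elite = pop[np.argsort(f)[:10]]                    # selection
    parents = elite[rng.integers(0, 10, size=(40, 2))]
    children = parents.mean(axis=1)                    # blend crossover
    children += rng.normal(scale=0.05, size=children.shape)  # mutation
    pop = children

best = pop[np.argmin(cost(pop))]
print(best)
```

Because the generator term is weighted more heavily than the TCSC term, the GA shifts most of the adjustment onto the cheaper TCSC setting while still meeting the flow target.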

Keywords: FACTS, multi-space-time frame, optimal control, TCSC

Procedia PDF Downloads 261
19424 Psychological Predictors in Performance: An Exploratory Study of a Virtual Ultra-Marathon

Authors: Michael McTighe

Abstract:

Background: The COVID-19 pandemic caused the cancellation of many large-scale in-person sporting events, which led to an increase in the availability of virtual ultra-marathons. This study assessed how participation in virtual long-distance races relates to levels of physical activity over an extended period of time. Moreover, traditional ultra-marathons are known for being not only physically demanding but also mentally and emotionally challenging; a second component of this study was therefore to assess how psychological constructs related to emotion regulation and mental toughness predict overall performance in the sport. Method: 83 virtual runners participating in a four-month 1000-kilometer race, with the option to exceed 1000 kilometers, completed a questionnaire exploring demographics and their performance and experience in the virtual race. Participants also completed the Difficulties in Emotion Regulation Scale (DERS) and the Sports Mental Toughness Questionnaire (SMTQ). Logistic regressions assessed these constructs' utility in predicting completion of the 1000-kilometer distance in the time allotted. Multiple regression was employed to predict the total distance traversed beyond 1000 kilometers during the four-month race. Results: Neither mental toughness nor emotion regulation was a significant predictor of completing the virtual race's basic 1000-kilometer finish. However, both variables included together were marginally significant predictors of the total distance traversed beyond 1000 km over the entire event (p = .051). Additionally, participation in the event promoted an increase in healthy activity, with participants running and walking significantly more during the four months of the event than in the four months leading up to it. Discussion: This research explored how psychological constructs relate to performance in a virtual endurance event and how involvement in these types of events relates to levels of activity.
Higher levels of mental toughness and lower levels of difficulties in emotion regulation were associated with greater performance, and participation in the event promoted an increase in athletic involvement. Future psychological skill training aimed at improving emotion regulation and mental toughness may be used to enhance athletic performance in these sports, and future investigations into these events could explore how general participation may influence these constructs over time. Finally, these results suggest that participation in this logistically accessible and affordable type of sport can promote greater involvement in healthy activities related to running and walking.
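The logistic-regression analysis can be sketched on synthetic stand-in scores; the data generation, coefficients, and effect sizes below are invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in scores for 83 hypothetical runners: completion odds
# rise with SMTQ (mental toughness) and fall with DERS (difficulties in
# emotion regulation). All numbers here are invented for illustration.
smtq = rng.normal(40.0, 8.0, size=83)
ders = rng.normal(80.0, 20.0, size=83)
logit = 0.1 * (smtq - 40.0) - 0.03 * (ders - 80.0)
finished = (rng.random(83) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Logistic regression fitted by gradient ascent on standardized predictors.
X = np.column_stack([
    np.ones(83),
    (smtq - smtq.mean()) / smtq.std(),
    (ders - ders.mean()) / ders.std(),
])
beta = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (finished - p) / len(finished)

print(beta)  # intercept, SMTQ slope, DERS slope
```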

Keywords: virtual races, emotion regulation, mental toughness, ultra-marathon, predictors in performance

Procedia PDF Downloads 90
19423 Bioinformatic Approaches in Population Genetics and Phylogenetic Studies

Authors: Masoud Sheidai

Abstract:

Biologists working in population genetics and phylogenetics face research tasks such as assessing populations' genetic variability and divergence, species relatedness, the evolution of genetic and morphological characters, and the identification of DNA SNPs with adaptive potential. To tackle these problems and reach concise conclusions, they must use proper and efficient statistical and bioinformatic methods as well as suitable genetic and morphological characteristics. In recent years, the application of different bioinformatic and statistical methods, based on various well-documented assumptions, has provided the proper analytical tools for researchers. Species delineation is usually carried out using clustering methods, such as K-means clustering, based on distance measures appropriate to the studied features of the organisms. A well-defined species is assumed to be separated from other taxa by molecular barcodes. Species relationships are studied using molecular markers, which are analyzed by methods such as multidimensional scaling (MDS) and principal coordinate analysis (PCoA). Population structuring and genetic divergence are usually investigated by PCoA and PCA together with a network diagram; these analyses rely on bootstrapping of the data. The association of genes and DNA sequences with ecological and geographical variables is determined by latent factor mixed models (LFMM) and redundancy analysis (RDA), which are based on Bayesian and distance methods, respectively. Molecular and morphological characters that differentiate the studied species may be identified by linear discriminant analysis (DA) and discriminant analysis of principal components (DAPC). We illustrate these methods and the related conclusions with examples from different edible and medicinal plant species.
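A minimal version of K-means species delineation on two morphological characters might look like this; the two "taxa" are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two hypothetical, well-separated "taxa" measured on two morphological
# characters; the data are synthetic and purely illustrative.
taxon_a = rng.normal([0.0, 0.0], 0.3, size=(30, 2))
taxon_b = rng.normal([3.0, 3.0], 0.3, size=(30, 2))
X = np.vstack([taxon_a, taxon_b])

# Lloyd's K-means with Euclidean distance and K = 2.
centers = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(20):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([
        X[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
        for k in range(2)
    ])

print(labels)  # each synthetic taxon ends up in its own cluster
```

With real data, the Euclidean distance would be replaced by a distance measure appropriate to the marker type.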

Keywords: GWAS analysis, K-Means clustering, LFMM, multidimensional scaling, redundancy analysis

Procedia PDF Downloads 118
19422 Proposal of Design Method in the Semi-Acausal System Model

Authors: Shigeyuki Haruyama, Ken Kaminishi, Junji Kaneko, Tadayuki Kyoutani, Siti Ruhana Omar, Oke Oktavianty

Abstract:

This study proposes a method for defining value and function in the manufacturing sector. In the current discussion of modeling methods, the definition of 1D-CAE has so far remained ambiguous and insufficiently conceptual. Across all fields of physics, such methods are defined through the formulation of differential algebraic equations, applying only time derivatives in simulation. We propose a semi-acausal modeling concept together with a differential algebraic equation method as a new modeling approach, whose efficiency has been verified by comparing the numerical results of the semi-acausal model calculation with FEM-based calculations.
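A minimal semi-explicit differential algebraic equation, solved by enforcing the algebraic constraint at every step, illustrates the DAE formulation in general terms; this toy system is not the authors' 1D-CAE model.

```python
import numpy as np

# Semi-explicit index-1 DAE (a toy system, not the authors' 1D-CAE model):
#   x'(t) = -x + z        (differential part)
#   0     =  x + z - 2    (algebraic constraint)
def solve_dae(x0, dt=1e-3, t_end=2.0):
    x = x0
    for _ in range(int(t_end / dt)):
        z = 2.0 - x              # solve the algebraic constraint for z
        x = x + dt * (-x + z)    # explicit Euler on the differential part
    return x, 2.0 - x            # z consistent with the final state

x, z = solve_dae(x0=5.0)
# Substituting z = 2 - x gives x' = 2 - 2x, with the exact solution
# x(t) = 1 + (x0 - 1) * exp(-2t).
exact = 1.0 + (5.0 - 1.0) * np.exp(-2.0 * 2.0)
print(x, exact)
```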

Keywords: system model, physical models, empirical models, conservation law, differential algebraic equation, object-oriented

Procedia PDF Downloads 479
19421 A Unified Ghost Solid Method for the Elastic Solid-Solid Interface

Authors: Abouzar Kaboudian, Boo Cheong Khoo

Abstract:

Ghost Solid Method (GSM) based algorithms have been extensively used for the numerical calculation of wave propagation in the limit of abrupt changes in materials. In this work, we present a unified version of the GSMs that can be successfully applied to both abrupt and smooth changes of the material properties in a medium. This method enables us to reuse previously matured numerical algorithms developed for homogeneous media, with only minor modifications. The method is developed for one-dimensional settings, and its extension to multiple dimensions is briefly discussed. Various numerical experiments are presented to show the applicability of this unified GSM to wave propagation problems in sharply as well as smoothly varying media.

Keywords: elastic solid, functionally graded material, ghost solid method, solid-solid interaction

Procedia PDF Downloads 411
19420 Global Stability of Nonlinear Itô Equations and N. V. Azbelev's W-Method

Authors: Arcady Ponosov, Ramazan Kadiev

Abstract:

This work studies the global moment stability of solutions of systems of nonlinear differential Itô equations with delays. A modified regularization method (the W-method) for the analysis of various types of stability of such systems, based on the choice of auxiliary equations and on applications of the theory of positive invertible matrices, is proposed and justified. The development of this method for deterministic functional differential equations is due to N. V. Azbelev and his students. Sufficient conditions for the moment stability of solutions are given in terms of the coefficients, both for sufficiently general and for specific classes of Itô equations.
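For a scalar Itô equation without delay, moment (mean-square) stability can be checked numerically by Euler-Maruyama simulation; the equation and coefficients below are a textbook toy, not the delayed systems treated by the W-method.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy mean-square stability check for the scalar Itô equation
#   dx = -a*x dt + sigma*x dW,
# which is moment (mean-square) stable iff 2a - sigma^2 > 0; here
# 2*1.0 - 0.5^2 > 0, so E[x^2] should decay. Coefficients are illustrative.
a, sigma = 1.0, 0.5
dt, steps, paths = 1e-3, 4000, 2000

x = np.ones(paths)
for _ in range(steps):
    dw = rng.normal(scale=np.sqrt(dt), size=paths)
    x = x + (-a * x) * dt + sigma * x * dw  # Euler-Maruyama step

print(np.mean(x**2))  # second moment at t = 4, far below the initial 1.0
```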

Keywords: asymptotic stability, delay equations, operator methods, stochastic noise

Procedia PDF Downloads 218
19419 Differential Transform Method: Some Important Examples

Authors: M. Jamil Amir, Rabia Iqbal, M. Yaseen

Abstract:

In this paper, we solve some differential equations analytically using the differential transform method. For this purpose, we consider four models of the Laplace equation, with two Dirichlet and two Neumann boundary conditions, as well as the K(2,2) equation, and obtain the corresponding exact solutions. The obtained results show the simplicity of the method and the massive reduction in calculations compared with other iterative methods available in the literature. It is worth mentioning that only a few iterations are required here to reach the closed-form solutions as series expansions of known functions.
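As a one-line illustration of the method (far simpler than the Laplace and K(2,2) models above), the differential transform of y' = y, y(0) = 1 gives the recurrence (k+1)Y(k+1) = Y(k), whose truncated series reconstructs exp(x):

```python
import math

# Differential transform of y' = y, y(0) = 1: with Y(k) the k-th
# transform coefficient, the ODE becomes (k + 1) * Y(k + 1) = Y(k),
# so Y(k) = 1/k! and sum_k Y(k) x^k reconstructs exp(x).
N = 15
Y = [1.0]                        # Y(0) = y(0) = 1
for k in range(N):
    Y.append(Y[k] / (k + 1))     # recurrence from the transformed ODE

def dtm_solution(x):
    """Partial sum of the differential-transform series."""
    return sum(Y[k] * x**k for k in range(len(Y)))

print(dtm_solution(1.0), math.e)  # the truncated series converges to e
```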

Keywords: differential transform method, Laplace equation, Dirichlet boundary conditions, Neumann boundary conditions

Procedia PDF Downloads 530
19418 Machine Learning Approach for Lateralization of Temporal Lobe Epilepsy

Authors: Samira-Sadat JamaliDinan, Haidar Almohri, Mohammad-Reza Nazem-Zadeh

Abstract:

Lateralization of temporal lobe epilepsy (TLE) is very important for positive surgical outcomes. We propose a machine learning framework to identify the epileptogenic hemisphere in TLE cases using magnetoencephalography (MEG) coherence source imaging (CSI) and diffusion tensor imaging (DTI). Unlike most studies, which use classification algorithms, we propose an effective clustering approach to distinguish between normal and TLE cases, applying the well-known Minkowski weighted K-means (MWK-means) technique as the clustering framework. To overcome the poor-initialization problem of K-means, we use particle swarm optimization (PSO) to select the initial cluster centroids before applying MWK-means. We demonstrate that this approach improves on both K-means and MWK-means applied independently on a benchmark data set.
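The PSO-based centroid initialization can be sketched as follows; this simplified version optimizes plain K-means inertia on synthetic data rather than the Minkowski-weighted criterion, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic two-cluster data; PSO searches for K = 2 centroids (encoded
# as a flat 4-vector) minimizing the K-means inertia. All settings here
# are illustrative, and plain inertia stands in for the MWK criterion.
X = np.vstack([rng.normal(0.0, 0.4, size=(40, 2)),
               rng.normal(4.0, 0.4, size=(40, 2))])

def inertia(c):
    c = c.reshape(2, 2)
    d = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).sum())

n_particles, dim = 20, 4
pos = rng.uniform(-1.0, 5.0, size=(n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_f = np.array([inertia(p) for p in pos])
for _ in range(100):
    gbest = pbest[pbest_f.argmin()]
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([inertia(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]

centroids = pbest[pbest_f.argmin()].reshape(2, 2)
print(centroids)  # candidate centroids with which to seed K-means
```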

Keywords: temporal lobe epilepsy, machine learning, clustering, magnetoencephalography

Procedia PDF Downloads 147
19417 Genetic Diversity Analysis in Ecological Populations of Persian Walnut

Authors: Masoud Sheidai, Fahimeh Koohdar, Hashem Sharifi

Abstract:

Juglans regia L., commonly known as Persian walnut, of the genus Juglans L. (Juglandaceae), is one of the most important cultivated plant species due to its high-quality wood and edible nuts. Genetic diversity analysis is essential for the conservation and management of tree species. Persian walnut is native to a range extending from South-Eastern Europe to North-Western China through Tibet, Nepal, Northern India, Pakistan, and Iran. Species such as Persian walnut, which have a wide geographical distribution, should harbor extensive genetic variability to adapt to the environmental fluctuations they face. We aimed to study the population genetic structure of seven Persian walnut populations, including three wild and four cultivated populations, using ISSR (inter-simple sequence repeat) and SRAP (sequence-related amplified polymorphism) molecular markers. We also aimed to compare the genetic variability revealed by the neutral multilocus ISSR marker with that revealed by rDNA ITS sequences. The studied populations differed in morphological features: the samples in each population clustered together and were separate from the other populations, while the three wild populations were placed close to each other. The Mantel test with 5000 permutations, performed between geographical distance and morphological distance in the Persian walnut populations, produced a significant correlation (r = 0.48, P = 0.002); therefore, as the populations become farther apart, they become more divergent in morphological features. ISSR analysis produced 47 bands/loci, while SRAP produced 15 bands. Gst and other differentiation statistics determined for these loci revealed that most of the ISSR and SRAP loci have very good discrimination power and can differentiate the studied populations. AMOVA performed for these loci produced a significant difference (P < 0.05), supporting this result, and AMOVA based on ISSR data showed a significant genetic difference among the studied populations (PhiPT = 0.52, P = 0.001).
AMOVA also revealed that 53% of the total variability is due to among-population genetic differences, while 47% is due to within-population genetic variability. The results showed that both multilocus molecular markers and ITS sequences can differentiate Persian walnut populations. The studied populations differed genetically and showed isolation by distance (IBD). ITS sequence-based MP and Bayesian phylogenetic trees revealed that Iranian walnut cultivars form a distinct clade, separated from the cultivars studied from elsewhere, and almost all clades obtained have high bootstrap values. The results indicate that a combination of multilocus and sequence-based molecular markers can be used in the genetic differentiation of Persian walnut.
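A permutation-based Mantel test like the one reported above (r = 0.48, P = 0.002) can be sketched as follows, with hypothetical coordinates standing in for the walnut populations:

```python
import numpy as np

rng = np.random.default_rng(8)

def mantel(d1, d2, n_perm=5000):
    """Permutation p-value for the correlation between the upper
    triangles of two square distance matrices (one-sided, r >= r_obs)."""
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        d2p = d2[np.ix_(perm, perm)]  # permute rows and columns together
        if np.corrcoef(d1[iu], d2p[iu])[0, 1] >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Hypothetical populations: morphology tracks geography, so the two
# distance matrices should correlate and the test should be significant.
geo = rng.random((12, 2))
morpho = geo + rng.normal(scale=0.05, size=geo.shape)
dg = np.linalg.norm(geo[:, None, :] - geo[None, :, :], axis=2)
dm = np.linalg.norm(morpho[:, None, :] - morpho[None, :, :], axis=2)
r, p = mantel(dg, dm)
print(r, p)
```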

Keywords: genetic diversity, population, molecular markers, genetic difference

Procedia PDF Downloads 159
19416 Security in Resource-Constrained Networks: Lightweight Encryption for Z-MAC

Authors: Mona Almansoori, Ahmed Mustafa, Ahmad Elshamy

Abstract:

A wireless sensor network is formed by a combination of nodes that systematically transmit data to their base stations. This transmitted data can easily be compromised given the limited processing power of these nodes and the need for data consistency, so securing data transfer in real time is an ongoing concern. This paper presents a mechanism to securely transmit data over a chain of sensor nodes without compromising the throughput of the network, while making efficient use of the battery resources available in each sensor node. Our methodology takes advantage of the efficiency of the Z-MAC protocol and provides a unique key through a sharing mechanism based on neighbor node MAC addresses. We present a lightweight data integrity layer embedded in the Z-MAC protocol and show that our protocol performs better than Z-MAC when different attack scenarios are introduced.
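The neighbor-MAC-based key sharing could, in one simple reading, be sketched as hashing the sorted address pair with a pre-shared network secret; the scheme and all names below are illustrative assumptions, not the paper's actual protocol.

```python
import hashlib

# One simple reading of neighbour-based key sharing (an illustrative
# assumption, not the paper's scheme): both ends hash the sorted
# MAC-address pair with a pre-shared network secret, so each neighbour
# pair independently derives the same 128-bit key.
def pairwise_key(mac_a: str, mac_b: str, network_secret: bytes) -> bytes:
    lo, hi = sorted((mac_a.lower(), mac_b.lower()))
    material = network_secret + lo.encode() + b"|" + hi.encode()
    return hashlib.sha256(material).digest()[:16]  # truncate to 128 bits

k1 = pairwise_key("00:1A:2B:3C:4D:5E", "00:1A:2B:3C:4D:5F", b"net-secret")
k2 = pairwise_key("00:1A:2B:3C:4D:5F", "00:1A:2B:3C:4D:5E", b"net-secret")
print(k1 == k2)  # True: both ends derive the same key
```

Sorting the addresses makes the derivation symmetric, so neither node needs to know which end initiated the exchange.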

Keywords: hybrid MAC protocol, data integrity, lightweight encryption, neighbor-based key sharing, sensor node data processing, Z-MAC

Procedia PDF Downloads 138