Search results for: Wiener filtering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 398

68 Assimilating Multi-Mission Satellite Data into a Hydrological Model

Authors: Mehdi Khaki, Ehsan Forootan, Joseph Awange, Michael Kuhn

Abstract:

Terrestrial water storage, as a source of freshwater, plays an important role in human lives. Hydrological models offer important tools for simulating and predicting water storage at global and regional scales. However, their agreement with 'reality' is imperfect, mainly due to a high level of uncertainty in input data, limitations in accounting for all complex water cycle processes, uncertainties in (unknown) empirical model parameters, and the absence of high-resolution (both spatial and temporal) data. Data assimilation can mitigate this drawback by incorporating new sets of observations into models. In this effort, we use multi-mission satellite-derived remotely sensed observations to improve the performance of the World-Wide Water Resources Assessment system (W3RA) hydrological model for estimating terrestrial water storage. For this purpose, we assimilate total water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) and surface soil moisture data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) into W3RA. This is done to (i) improve model estimates of groundwater and soil moisture storage, and (ii) assess the impact of each satellite data set (GRACE and AMSR-E), and of their combination, on the final terrestrial water storage estimates. These data are assimilated into W3RA using the Ensemble Square-Root Filter (EnSRF) technique over the Mississippi Basin (United States) and the Murray-Darling Basin (Australia) between 2002 and 2013. To evaluate the results, independent ground-based groundwater and soil moisture measurements within each basin are used.
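
As an illustration of the assimilation step, the sketch below implements a serial Ensemble Square-Root Filter update for a single scalar observation (e.g., one gridded GRACE TWS value), following the standard Whitaker-Hamill square-root form; the state layout, observation operator, and error variance are illustrative assumptions, not the authors' W3RA configuration.

```python
import numpy as np

def ensrf_update(ensemble, obs, obs_var, H):
    """Serial EnSRF update (Whitaker-Hamill form) for one scalar observation.
    ensemble : (n_state, n_members) array of model state realizations
    obs      : observed value (e.g., one GRACE TWS sample)
    obs_var  : observation error variance R
    H        : (n_state,) linear observation operator
    """
    n = ensemble.shape[1]
    x_mean = ensemble.mean(axis=1)
    X = ensemble - x_mean[:, None]            # ensemble perturbations
    Hx = H @ ensemble                         # observation-space ensemble
    HX = Hx - Hx.mean()
    PHt = X @ HX / (n - 1)                    # state-obs cross covariance
    HPHt = HX @ HX / (n - 1)                  # obs-space ensemble variance
    K = PHt / (HPHt + obs_var)                # Kalman gain for the mean
    # reduced gain for the perturbations (square-root correction)
    alpha = 1.0 / (1.0 + np.sqrt(obs_var / (HPHt + obs_var)))
    x_mean = x_mean + K * (obs - Hx.mean())   # update ensemble mean
    X = X - alpha * np.outer(K, HX)           # update perturbations
    return x_mean[:, None] + X
```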

Keywords: data assimilation, GRACE, AMSR-E, hydrological model, EnSRF

Procedia PDF Downloads 291
67 Oxide Based Memristor and Its Potential Application in Analog-Digital Electronics

Authors: P. Michael Preetam Raj, Souri Banerjee, Souvik Kundu

Abstract:

Oxide-based memristors were fabricated in order to establish their potential applications in analog/digital electronics. BaTiO₃-BiFeO₃ (BT-BFO) was employed as the active material, whereas platinum (Pt) and Nb-doped SrTiO₃ (Nb:STO) served as the top and bottom electrodes, respectively. Piezoelectric force microscopy (PFM) was utilized to demonstrate the ferroelectricity and repeatable polarization inversion in the BT-BFO, confirming its effectiveness for resistive switching. The fabricated memristors exhibited excellent electrical characteristics, such as hysteretic current-voltage (I-V) behavior, a high on/off ratio, long retention time, cyclic endurance, and low operating voltages. The band alignment between the active material BT-BFO and the substrate Nb:STO was experimentally investigated using X-ray photoelectron spectroscopy and was attributed to a staggered heterojunction alignment. An energy band diagram was proposed in order to understand the electrical transport in the BT-BFO/Nb:STO heterojunction. The I-V curves of these memristors were found to have several discontinuities. A curve-fitting technique was utilized to analyse the I-V characteristics, and the fitted I-V equations were found to be parabolic. Utilizing this analysis, a non-linear equivalent circuit model of the BT-BFO memristors was developed. Interestingly, the equivalent circuit mimics the electrical performance of the fabricated devices. Based on the developed equivalent circuit, a finite state machine (FSM) design was proposed. Efforts were devoted to fabricating the FSM, and the results matched well with those of the simulated FSM devices. Its multilevel noise filtering and immunity to external noise were also studied. Further, the feature of variable negative resistance was established by controlling the current through the memristor.
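
To make the parabolic curve-fitting step concrete, the following minimal sketch fits a second-order polynomial to one branch of an I-V loop; the sample voltage-current values are hypothetical placeholders, not measured BT-BFO data.

```python
import numpy as np

# Hypothetical samples from one continuous branch of the hysteresis loop
# (units: V, A); real data would come from the fabricated device.
v = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
i = np.array([0.0, 0.3e-6, 1.1e-6, 2.6e-6, 4.5e-6, 7.1e-6])

# Least-squares parabolic fit: i ~= a*v**2 + b*v + c
a, b, c = np.polyfit(v, i, deg=2)
print("I(V) ~= %.3e*V^2 + %.3e*V + %.3e" % (a, b, c))

# Goodness of fit via the RMS residual
residuals = i - np.polyval([a, b, c], v)
print("RMS residual:", np.sqrt(np.mean(residuals**2)))
```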

Keywords: band alignment, finite state machine, polarization inversion, resistive switching

Procedia PDF Downloads 135
66 Environmental Impact Assessment in Mining Regions with Remote Sensing

Authors: Carla Palencia-Aguilar

Abstract:

Calculations of net carbon balance can be obtained by means of Net Biome Productivity (NBP), Net Ecosystem Productivity (NEP), and Net Primary Production (NPP). The latter is an important component of the biosphere carbon cycle and is readily obtained from the MODIS MOD17A3HGF product; however, those results are only available yearly. To overcome this limitation in data availability, bands 33 to 36 from MODIS MYD021KM (obtained on a daily basis) were analyzed and compared with NPP data from the years 2000 to 2021 at 7 sites where surface mining takes place in the Colombian territory. Coal, gold, iron, and limestone were the minerals of interest. Scales and units, as well as thermal anomalies, were considered for the net carbon balance per location. The NPP time series from the satellite images were filtered using two MATLAB filters: a first-order filter and a discrete transfer function. After filtering the NPP time series, comparing the resulting graphs with the satellite image values, and running a linear regression, the results showed R² values from 0.72 to 0.85. To establish comparable units between NPP and bands 33 to 36, the EPA Greenhouse Gas Equivalencies Calculator was used. The comparison was established in two ways: one by the sum of all the data per point per year, and the other by the average of 46 weeks, finding the percentage that the value represented with respect to NPP. The former underestimated the total CO2 emissions. The results also showed that coal and gold mining in the last 22 years had lower CO2 emissions than limestone, with averages per year of 143 kton CO2 eq for gold, 152 kton CO2 eq for coal, and 287 kton CO2 eq for iron. Limestone emissions varied from 206 to 441 kton CO2 eq. The maximum emission values from unfiltered data correspond to 165 kton CO2 eq for gold and 188 kton CO2 eq for coal, with 310 kton CO2 eq for iron and limestone varying from 231 to 490 kton CO2 eq. If the most polluting limestone site improves its production technology, limestone could be limited to a maximum of 318 kton CO2 eq per year, a value very similar to that of iron. The importance of gathering these data is to establish benchmarks in order to attain the 2050 zero-emissions goal.
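
The filtering-plus-regression step can be sketched as below; a single-pole low-pass filter stands in for the first-order MATLAB filter, written as a discrete transfer function, and the synthetic series and reference are placeholders for the MYD021KM-derived proxy and the annual NPP data.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.stats import linregress

# Hypothetical weekly NPP-proxy series for one mining site (46 values/year);
# real inputs would come from MODIS MYD021KM bands 33-36.
rng = np.random.default_rng(0)
t = np.linspace(0, 6 * np.pi, 46 * 3)
raw = 100 + 10 * np.sin(t) + rng.normal(0, 3, t.size)

# First-order low-pass filter y[n] = a*x[n] + (1-a)*y[n-1], expressed as a
# discrete transfer function b/a for scipy.signal.lfilter.
alpha = 0.2
filtered = lfilter([alpha], [1, -(1 - alpha)], raw)

# Stand-in reference series (would be the resampled MOD17A3HGF NPP data)
reference = raw + rng.normal(0, 2, raw.size)

fit = linregress(filtered, reference)
print("R^2 =", fit.rvalue**2)
```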

Keywords: carbon dioxide, NPP, MODIS, mining

Procedia PDF Downloads 105
65 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks

Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi

Abstract:

Brain-computer interfaces are a growing research field with many implementations used for both research and practical purposes. Despite the popularity of implementations based on non-invasive neuroimaging methods, radical improvement of the achievable channel bandwidth, and thus of decoding accuracy, is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, whose effective analysis requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out, during which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. The multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained, using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that the combination of a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows decoding movement with high accuracy. Such a setup provides means for controlling devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution.
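
For reference, a Wiener-filter-like causal baseline of the kind compared against above can be sketched as a ridge-regularized linear regression on lagged ECoG samples; the lag count, regularization, and data layout are illustrative assumptions, not the authors' exact baseline.

```python
import numpy as np

def causal_linear_decoder(ecog, accel, n_lags=50, ridge=1e-3):
    """Predict the current accelerometer sample from the previous n_lags
    samples of every ECoG channel (strictly causal design matrix).
    ecog : (n_samples, n_channels) array; accel : (n_samples,) array
    """
    n, c = ecog.shape
    X = np.zeros((n - n_lags, n_lags * c))
    for k in range(n_lags):
        # column block k holds the samples at time t-1-k
        X[:, k * c:(k + 1) * c] = ecog[n_lags - 1 - k:n - 1 - k]
    y = accel[n_lags:]
    # Regularized least squares (a discrete Wiener solution with ridge term)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
    y_hat = X @ w
    r = np.corrcoef(y_hat, y)[0, 1]   # the decoding-accuracy metric used above
    return w, r
```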

Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex

Procedia PDF Downloads 178
64 An Improved Two-dimensional Ordered Statistical Constant False Alarm Detection

Authors: Weihao Wang, Zhulin Zong

Abstract:

Two-dimensional ordered-statistic constant false alarm rate (CFAR) detection is a widely used method for detecting weak target signals in radar signal processing applications. The method analyzes the statistical characteristics of the noise and clutter present in the radar signal and uses this information to set an appropriate detection threshold. The reference cell of the unit under test is divided into several reference subunits, which are used to estimate the noise level and adjust the detection threshold, with the aim of minimizing the false alarm rate. The detection process involves filtering the input radar signal to remove noise and clutter, estimating the noise level from the statistical characteristics of the reference subunits via an ordered statistic, and finally setting the detection threshold based on the estimated noise level. The main advantage of two-dimensional ordered-statistic CFAR detection is its ability to detect weak target signals in the presence of strong clutter and noise: by ranking the reference cells and selecting an ordered statistic as the noise estimate, the influence of clutter and outliers is effectively suppressed while a low, constant false alarm rate is maintained.
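
A minimal sketch of the generic 2-D OS-CFAR mechanism described above is given below; the window sizes, rank fraction, and threshold multiplier are illustrative assumptions, not the improved scheme proposed in the paper.

```python
import numpy as np

def os_cfar_2d(power, guard=2, train=4, k_frac=0.75, scale=6.0):
    """Ordered-statistic CFAR over a 2-D power map.
    guard : guard cells on each side of the cell under test
    train : training (reference) cells beyond the guards
    k_frac: rank of the ordered statistic, as a fraction of the window
    scale : threshold multiplier controlling the false alarm rate
    """
    h, w = power.shape
    r = guard + train
    detections = np.zeros_like(power, dtype=bool)
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = power[i - r:i + r + 1, j - r:j + r + 1].astype(float)
            # exclude the guard region and the cell under test
            window[train:train + 2 * guard + 1,
                   train:train + 2 * guard + 1] = np.nan
            ref = np.sort(window[~np.isnan(window)])
            noise_est = ref[int(k_frac * (ref.size - 1))]  # ordered statistic
            detections[i, j] = power[i, j] > scale * noise_est
    return detections
```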

Keywords: two-dimensional, ordered statistical, constant false alarm, detection, weak target signals

Procedia PDF Downloads 79
63 GC-MS Analysis of Bioactive Compounds in the Ethanolic Extract of Nest Material of Mud Wasp, Sceliphron caementarium

Authors: P. Susheela, Mary Rosaline, R. Radha

Abstract:

This research was designed to determine the bioactive compounds present in nest samples of the mud dauber wasp, Sceliphron caementarium. Insects and insect-based products have been used for the treatment of various ailments for a very long time. All over the world, including in Western societies and among indigenous populations, insect-based medicine plays an important role in various healing practices and magic rituals. Nevertheless, studies on the therapeutic usage of insects are negligible when compared to plants. In the present scenario, it is important to explore bioactive compounds from natural sources rather than depending on synthetic drugs that have adverse effects on the human body. Keeping this in view, an attempt was made to analyze and identify bioactive components from nest samples of Sceliphron caementarium. The nests were collected from Coimbatore, Tamil Nadu, India. The nest sample was extracted with ethanol for 6-8 hours using a Soxhlet apparatus. The final residue was obtained by filtering the extract through Whatman No. 41 filter paper. GC-MS analysis of the nest sample was performed using a PerkinElmer Elite-5 capillary column. The resulting compounds were compared with the databases of the National Institute of Standards and Technology (NIST), WILEY8, and FAME. The GC-MS analysis of the concentrated ethanol extract revealed the presence of eight constituents, including Methylene chloride, Eicosanoic acid, 1,1':3',1''-Terphenyl, 5'-Phenyl, Di-N-Decylsulfone, 1,2-Bis(Trimethylsilyl)Benzene, Androstane-11,17-Dione, 3-[(Trimethylsilyl)Oxy]-, 17-[O-(Phenylmethyl) O. Most of the identified compounds have been reported to possess biological activities, viz. anti-inflammatory, antibacterial, and antifungal properties, which can be of pharmaceutical importance, and further study of these isolated compounds may prove their medicinal importance in the future.

Keywords: Sceliphron caementarium, gas chromatography-mass spectrometry, ethanol extract, bioactive compounds

Procedia PDF Downloads 296
62 Analysis of Biomarkers Intractable Epileptogenic Brain Networks with Independent Component Analysis and Deep Learning Algorithms: A Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients

Authors: Bliss Singhal

Abstract:

Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide and 1.2 million Americans. Millions of pediatric patients have intractable epilepsy, a condition in which seizures fail to come under control. The occurrence of seizures can result in physical injury, disorientation, unconsciousness, and additional symptoms that could impede children's ability to participate in everyday tasks. Predicting seizures can help parents and healthcare providers take precautions, prevent risky situations, and mentally prepare children to minimize the anxiety and nervousness associated with the uncertainty of a seizure. This research proposes a comprehensive framework to predict seizures in pediatric patients by evaluating machine learning algorithms on unimodal neuroimaging data consisting of electroencephalogram (EEG) signals. Bandpass filtering and independent component analysis proved effective in reducing noise and artifacts in the dataset. The performance of various machine learning algorithms is evaluated on metrics such as accuracy, precision, specificity, sensitivity, F1 score, and Matthews correlation coefficient (MCC). The results show that the deep learning algorithms are more successful in predicting seizures than logistic regression and k-nearest neighbors. The recurrent neural network (RNN) gave the highest precision and F1 score, long short-term memory (LSTM) outperformed the RNN in accuracy, and the convolutional neural network (CNN) achieved the highest specificity. This research has significant implications for healthcare providers in proactively managing seizure occurrence in pediatric patients, potentially transforming clinical practices and improving pediatric care.
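
The preprocessing stage named above (bandpass filtering followed by independent component analysis) can be sketched as follows; the sampling rate, band edges, and filter order are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

def preprocess_eeg(eeg, fs=256.0, low=0.5, high=40.0, n_components=None):
    """Bandpass-filter multichannel EEG and unmix it with ICA.
    eeg : (n_samples, n_channels) raw EEG matrix
    """
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=0)     # zero-phase bandpass
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(filtered)      # independent components
    # artifact components would be inspected and zeroed here, then:
    cleaned = ica.inverse_transform(sources)   # project back to channel space
    return cleaned
```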

Keywords: intractable epilepsy, seizure, deep learning, prediction, electroencephalogram channels

Procedia PDF Downloads 85
61 A Single Feature Probability-Object Based Image Analysis for Assessing Urban Landcover Change: A Case Study of Muscat Governorate in Oman

Authors: Salim H. Al Salmani, Kevin Tansey, Mohammed S. Ozigis

Abstract:

The study of the growth of built-up areas and settlement expansion is a major exercise that city managers undertake to establish previous and current development trends, ensuring that settlement expansion needs are matched by appropriate levels of services and infrastructure. This research aims at demonstrating the potential of satellite image processing, harnessing the single feature probability-object based image analysis (SFP-OBIA) technique to assess the urban growth dynamics of the Muscat Governorate in Oman for the years 1990, 2002, and 2013. The need is fueled by the continuous expansion of the Muscat Governorate beyond predicted levels of infrastructural provision. Landsat images of 1990, 2002, and 2013 were downloaded and preprocessed to meet appropriate radiometric and geometric standards. A novel approach of probability filtering of the target feature segment was implemented to derive the spatial extent of the final built-up area of the Muscat Governorate for the three years. This proved to be a useful technique, as accuracy assessment results of 55%, 70%, and 71% were recorded for the urban landcover of 1990, 2002, and 2013, respectively. Furthermore, the Normalized Difference Built-Up Index (NDBI) was derived for the various images and used to corroborate the results of the SFP-OBIA through a linear regression model and visual comparison. The results showed various hotspots where urbanization has taken place sporadically. Specifically, settlements in the districts (wilayats) of Al-Amarat, Muscat, and Qurayyat experienced tremendous change between 1990 and 2002, while the districts of Al-Seeb, Bawshar, and Muttrah experienced more sporadic changes between 2002 and 2013.
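
The NDBI used for corroboration is a standard band ratio; a minimal sketch is shown below, assuming Landsat TM/ETM+ band assignments (band 5 = SWIR, band 4 = NIR) and reflectance inputs.

```python
import numpy as np

def ndbi(swir, nir):
    """Normalized Difference Built-Up Index, (SWIR - NIR) / (SWIR + NIR).
    swir, nir : 2-D reflectance arrays of the same shape."""
    swir = swir.astype(float)
    nir = nir.astype(float)
    return (swir - nir) / (swir + nir + 1e-12)  # epsilon avoids 0/0

# Built-up pixels tend toward positive NDBI, so a simple mask could be:
# built_up = ndbi(swir, nir) > 0
```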

Keywords: urban growth, single feature probability, object based image analysis, landcover change

Procedia PDF Downloads 275
60 The Observable Method for the Regularization of Shock-Interface Interactions

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique that is capable of regularizing shocks and sharp interfaces simultaneously in shock-interface interaction simulations. The direct numerical simulation of flows involving shocks has been investigated for many years, and many numerical methods have been developed to capture shocks. However, most of these methods rely on numerical dissipation to regularize the shocks. Moreover, in high Reynolds number flows, the nonlinear terms in the hyperbolic partial differential equations (PDEs) dominate, constantly generating small-scale features. This makes direct numerical simulation of shocks even harder. The same difficulty arises in two-phase flows with sharp interfaces, where the nonlinear terms in the governing equations keep sharpening the interfaces into discontinuities. The main idea of the proposed technique is to average out the small scales below the resolution (the observable scale) of the computational grid by filtering the convective velocity in the nonlinear terms of the governing PDEs. This technique is named the 'observable method', and it results in a set of hyperbolic equations called observable equations, namely, the observable Navier-Stokes or Euler equations. The observable method has been applied to flow simulations involving shocks, turbulence, and two-phase flows, and the results are promising. In the current paper, the observable method is examined on its ability to regularize shocks and interfaces at the same time in shock-interface interaction problems. Bubble-shock interactions and the Richtmyer-Meshkov instability are chosen for study. The observable Euler equations are numerically solved with pseudo-spectral discretization in space and a third-order Total Variation Diminishing (TVD) Runge-Kutta method in time. Results are presented and compared with existing publications. Interface acceleration and deformation and shock reflection are particularly examined.
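
The key operation, filtering the convective velocity, can be sketched in one dimension as below; a Helmholtz-type low-pass filter applied in Fourier space on a periodic domain is used here as a stand-in, and the filter form and parameter alpha are illustrative assumptions rather than the paper's exact kernel.

```python
import numpy as np

def filtered_velocity(u, dx, alpha):
    """Low-pass filter a 1-D convective velocity on a periodic domain,
    u_bar = (1 - alpha^2 d^2/dx^2)^(-1) u, evaluated spectrally."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)       # angular wavenumbers
    u_hat = np.fft.fft(u)
    u_bar_hat = u_hat / (1.0 + (alpha * k) ** 2)  # attenuate sub-alpha scales
    return np.real(np.fft.ifft(u_bar_hat))

# In an observable-equation step, the filtered u_bar (not u) multiplies the
# gradient in the convective term, e.g. du/dt + u_bar * du/dx = 0,
# preventing the nonlinearity from sharpening features below the grid.
```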

Keywords: compressible flow simulation, inviscid regularization, Richtmyer-Meshkov instability, shock-bubble interactions

Procedia PDF Downloads 349
59 Amrita Bose-Einstein Condensate Solution Formed by Gold Nanoparticles Laser Fusion and Atmospheric Water Generation

Authors: Montree Bunruanses, Preecha Yupapin

Abstract:

In this work, the quantum material called Amrita (elixir) is made by reducing gold top-down into nanometer particles, fusing 99% pure gold with a laser and mixing it with drinking water produced by an atmospheric water generation (AWG) system, which makes water from air. The high laser power breaks the four natural force bindings (the gravitational, weak, electromagnetic, and strong coupling forces), finally yielding purified Bose-Einstein condensate (BEC) states. With this method, gold atoms in the form of spherical single crystals with a diameter of 30-50 nanometers are obtained and used. They were modulated (activated) with a frequency generator into various matrix structures mixed with the AWG water to be used in the upstream conversion (quantum reversible) process, which can be applied to humans both internally and externally, by drinking or by application to treated surfaces. Working on both space (body) and time (mind) will go back to the origin and start again from the coupling of space-time on both sides of time, fusing (strong coupling force) and pushing out (Big Bang) at the equilibrium point (singularity), which occurs as strings and DNA with neutrinos as the coupling energy. There is no distortion (purification), which is the point where time and space have not yet been determined and there is infinite energy; therefore, the upstream conversion is performed, reforming the DNA to purify it. The use of Amrita is a method intended for people who cannot meditate (quantum meditation). Various cases were examined, and the results show that Amrita can make the body and mind return to their pure origins and begin the downstream process with the Big Bang movement, quantum communication in all dimensions, DNA reformation, frequency filtering, crystal body forming, broadband quantum communication networks, black hole forming, quantum consciousness, body and mind healing, etc.

Keywords: quantum materials, quantum meditation, quantum reversible, Bose-Einstein condensate

Procedia PDF Downloads 78
58 Eco-Friendly Softener Extracted from Ricinus communis (Castor) Seeds for Organic Cotton Fabric

Authors: Fisaha Asmelash

Abstract:

The processing of textiles to achieve a desired handle is a crucial aspect of finishing technology. Softeners can enhance the properties of textiles, such as softness, smoothness, elasticity, hydrophilicity, antistatic behavior, and soil release, depending on the chemical nature of the agent used. Human skin is sensitive to rough textiles, making softeners increasingly important. Although synthetic softeners are available, they are often expensive and can cause allergic reactions on human skin. This paper aims to extract a natural softener from Ricinus communis and produce an eco-friendly and user-friendly alternative, owing to its 100% herbal and organic nature. Crushed Ricinus communis seeds were soaked in a mechanical oil extractor for one hour with a 100 g cotton fabric sample. The defatted cake, or Ricinus communis meal, remaining after extraction of oil from the seeds was recovered by filtering the raffinate and then dried at 103 °C for four hours before being stored under laboratory conditions for the softening process. The softener was applied directly to 100% cotton fabric using the padding process, and the fabric was tested for stiffness, crease recovery, and drapability. The effect of different concentrations of the finishing agent on fabric stiffness, crease recovery, and drapability was also analyzed. The results showed that the change in fabric softness depends on the concentration of the finish used: as the concentration was increased, the bending length and drape coefficient decreased. Fabrics with a high concentration of softener showed the greatest decrease in drape coefficient and stiffness, comparable to commercial softeners such as silicone. The maximum increase in crease recovery was seen in fabrics treated with Ricinus communis softener at a concentration of 30 g/L. From these results, the extracted softener proved to be effective in the treatment of 100% cotton fabric.

Keywords: Ricinus communis, crease recovery, drapability, softeners, stiffness

Procedia PDF Downloads 92
57 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT

Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar

Abstract:

The X-ray attenuation coefficient µ(E) of any substance, at energy E, is the sum of contributions from Compton scattering, µCom(E), and the photoelectric effect, µPh(E). In terms of the electron density (ρe) and the effective atomic number (Zeff), µCom(E) is proportional to ρe·fKN(E), while µPh(E) is proportional to (ρe·Zeff^x)/E^y, with fKN(E) being the Klein-Nishina formula and x and y being the exponents for the photoelectric effect. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρe and Y = ρe·Zeff^x from these two independent equations, as is attempted in DECT inversion. Since µCom(E) and µPh(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <µ(V)> = <µw(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging process <...> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y, and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)> and <µ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100 and 140 kVp, as used for cardiological investigations. The S(E,V) are generated using the Boone-Seibert source spectrum, superposed on aluminium filters of different thicknesses lAl with 7 mm ≤ lAl ≤ 12 mm, and D(E) is taken to be that of a typical Si[Li] solid-state or GdOS scintillator detector. In the values of X and Y found by using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose, and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20%, with X within 1%. For high-Zeff materials like KOH, the value of Zeff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference in the values of the inversion coefficients for the two detector types is negligible: the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor in calculating the coefficients of inversion.
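
The two-equation inversion itself is a small linear solve, sketched below; the coefficient matrix, water references, and HU readings are placeholder numbers for illustration only, since the real coefficients depend on S(E,V) and D(E) as tabulated in the paper.

```python
import numpy as np

# Hypothetical inversion coefficients a_ij such that
#   <mu(V1)> = a11*X + a12*Y
#   <mu(V2)> = a21*X + a22*Y
# with X = rho_e and Y = rho_e * Zeff**x.
A = np.array([[0.180, 0.020],
              [0.170, 0.008]])

mu_w = np.array([0.190, 0.176])   # assumed water references at V1, V2 (1/cm)
HU = np.array([40.0, 55.0])       # measured HU at V1 = 100, V2 = 140 kVp
mu = mu_w * (1.0 + HU / 1000.0)   # <mu(V)> from the HU definition above

X, Y = np.linalg.solve(A, mu)     # invert the two linear equations
print("rho_e =", X, "; rho_e*Zeff^x =", Y)
```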

Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum

Procedia PDF Downloads 402
56 One-Step Synthesis and Characterization of Biodegradable ‘Click-Able’ Polyester Polymer for Biomedical Applications

Authors: Wadha Alqahtani

Abstract:

In recent times, polymers have seen a great surge of interest in the field of medicine, particularly chemotherapeutics. One recent innovation is the conversion of polymeric materials into 'polymeric nanoparticles'. These nanoparticles can be designed and modified to encapsulate and transport drugs selectively to cancer cells, minimizing collateral damage to surrounding healthy tissues and improving patient quality of life. In this study, we synthesized pseudo-branched polyester polymers from bio-based small molecules, including sorbitol, glutaric acid, and a propargylic acid derivative, the latter making the polymer 'click-able' with an azide-modified target ligand. A melt polymerization technique was used for this reaction, with the lipase enzyme catalyst NOVO 435; the reaction was conducted at 90-95 °C for 72 hours. Polymer samples were collected in 24-hour increments for characterization and to monitor reaction progress. The polymer was purified by dissolution in methanol and filtration through filter paper, then characterized via NMR, GPC, FTIR, DSC, TGA, and MALDI-TOF. Following characterization, these polymers were converted into a polymeric nanoparticle drug delivery system using the solvent diffusion method, wherein the optical dye DiI and the chemotherapeutic drug Taxol can be encapsulated simultaneously. The efficacy of the nanoparticles' apoptotic effects was analyzed in vitro by incubation with prostate cancer (LNCaP) and healthy (CHO) cells. MTT assays and fluorescence microscopy were used to assess cellular uptake and cell viability after 24 hours at 37 °C in a 5% CO2 atmosphere. The assays and fluorescence imaging confirmed that the nanoparticles selectively targeted and induced apoptosis in 80% of the LNCaP cells within 24 hours without affecting the viability of the CHO cells. These results show the potential of biodegradable polymers as vehicles for receptor-specific drug delivery and a potential alternative to traditional systemic chemotherapy. Detailed experimental results will be discussed in the e-poster.

Keywords: chemotherapeutic drug, click chemistry, nanoparticle, prostate cancer

Procedia PDF Downloads 117
55 Automatic Furrow Detection for Precision Agriculture

Authors: Manpreet Kaur, Cheol-Hong Min

Abstract:

The increasing adoption of robotics equipped with machine vision sensors in precision agriculture offers a promising solution to various problems on agricultural farms. An important issue for the machine vision system concerns crop row and weed detection. This paper proposes an automatic furrow detection system based on real-time processing for identifying crop rows in maize fields in the presence of weeds. The vision system is designed to be installed on farming vehicles and is therefore subjected to gyration, vibration, and other undesired movements. The images are captured in perspective and are affected by the above undesired effects. The goal is to identify crop rows for vehicle navigation, including weed removal, where weeds are identified as plants outside the crop rows. Image quality is affected by varying lighting conditions and by gaps along the crop rows due to lack of germination and planting errors. The proposed image processing method consists of four processes. First, image segmentation is performed based on an HSV (hue, saturation, value) decision tree; the algorithm uses the HSV color space to discriminate crops, weeds, and soil, with the region of interest defined by filtering each of the HSV channels between maximum and minimum threshold values. Second, noise in the images is eliminated by means of a hybrid median filter. Third, mathematical morphological operations are applied: erosion to remove smaller objects, followed by dilation to gradually enlarge the boundaries of foreground regions, which enhances image contrast. To accurately detect the position of crop rows, the region of interest is defined by creating a binary mask. Finally, edge detection and the Hough transform are applied to detect lines represented in polar coordinates, with furrow directions appearing as accumulations on the angle axis in Hough space. The experimental results show that the method is effective.
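
The pipeline can be sketched with OpenCV as below; a plain median blur stands in for the hybrid median filter, and the HSV thresholds, kernel sizes, and Hough parameters are illustrative assumptions rather than the authors' calibration.

```python
import cv2
import numpy as np

def detect_furrows(bgr):
    """Crop-row detection sketch: HSV threshold -> median filter ->
    morphology -> edges -> Hough lines in polar (rho, theta) form."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # keep green vegetation: each HSV channel between min and max thresholds
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    mask = cv2.medianBlur(mask, 5)                 # suppress speckle noise
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)   # remove small objects
    mask = cv2.dilate(mask, kernel, iterations=2)  # grow foreground regions
    edges = cv2.Canny(mask, 50, 150)
    # crop rows accumulate as clusters on the theta axis of Hough space
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
    return lines

# lines = detect_furrows(cv2.imread("maize_field.jpg"))  # hypothetical input
```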

Keywords: furrow detection, morphological, HSV, Hough transform

Procedia PDF Downloads 231
54 Filtering Momentum Life Cycles, Price Acceleration Signals and Trend Reversals for Stocks, Credit Derivatives and Bonds

Authors: Periklis Brakatsoulas

Abstract:

Recent empirical research shows a growing interest in investment decision-making under market anomalies that contradict the rational paradigm. Momentum is undoubtedly one of the most robust anomalies in empirical asset pricing research and has remained surprisingly lucrative ever since it was first documented. Although predominantly identified in equities, momentum premia are now evident across various asset classes. Yet few attempts have been made so far to provide traders with a diversified portfolio of strategies across different assets and markets. Moreover, the literature focuses on patterns in past returns rather than on mechanisms that signal future price directions prior to momentum runs. The aim of this paper is to develop a diversified portfolio approach to price distortion signals using daily position data on stocks, credit derivatives, and bonds. An algorithm allocates assets periodically, and new investment tactics take over upon price momentum signals and across different ranking groups. We focus on momentum life cycles, trend reversals, and price acceleration signals. The main effort concentrates on the density, time span, and maturity of momentum phenomena, in order to identify consistent patterns over time and measure the predictive power of the buy-sell signals generated by these anomalies. To tackle this, we propose a two-stage modelling process. First, we generate forecasts of core macroeconomic drivers. Second, satellite models generate market risk forecasts using the core driver projections from the first stage as input. Moreover, using a combination of ARFIMA and FIGARCH models, we examine the dependence of consecutive observations across time and portfolio assets, since long-memory behavior in the volatilities of one market appears to trigger persistent volatility patterns across other markets. We believe that this is the first work to employ evidence of volatility transmission among derivatives, equities, and bonds to identify momentum life cycle patterns.

Keywords: forecasting, long memory, momentum, returns

Procedia PDF Downloads 103
53 Improved Diver Tracking and Classification in Sonar Images Using a Robust Diver Wake Detection Algorithm

Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy

Abstract:

Harbor protection systems are of great importance, and the need for automatic protection systems has increased in recent years. Active diver-detection sonar plays a significant role: it is used to detect underwater threats such as divers and autonomous underwater vehicles. To detect such threats automatically, the sonar image is processed by algorithms that detect, track, and classify underwater objects. In this work, a diver tracking and classification algorithm is improved by proposing a robust wake detection method. To detect objects, the sonar image is normalized and then segmented with a fixed threshold. Next, the centroids of the segments are found and clustered based on a distance metric. A linear Kalman filter is then applied to track the objects. To reduce the effect of noise and the creation of false tracks, the Kalman tracker is fine-tuned based on our active sonar specifications. After the tracks are initialized and updated, they are subjected to a filtering stage to eliminate noisy and unstable tracks, as well as objects with speeds outside the diver speed range, such as buoys and fast boats. Afterwards, the resulting tracks are passed to a classification stage to decide whether the tracked object is an open-circuit diver or a closed-circuit diver. At the classification stage, a small area around the object is extracted and a novel wake detection method is applied. Morphological features of the object together with its wake are extracted, and a support vector machine is used to find the best classifier. The sonar training and test images were collected by ARMELSAN Defense Technologies Company using the portable diver detection sonar ARAS-2023. After applying the algorithm to the test sonar data, we obtain fine and stable tracks of the divers; the total classification accuracy achieved for diver type is 97%.
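
The linear Kalman tracking stage can be sketched as a constant-velocity filter over the segment centroids; the time step and the process/measurement noise levels below are illustrative assumptions, whereas in the paper they are tuned to the sonar specifications.

```python
import numpy as np

class ConstantVelocityKalman:
    """Linear Kalman filter for a 2-D sonar centroid track.
    State: [x, y, vx, vy]; q and r would be tuned to the sonar specs."""
    def __init__(self, x0, y0, dt=0.5, q=0.1, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)     # constant-velocity model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)     # we measure position only
        self.P = np.eye(4) * 10.0
        self.Q = np.eye(4) * q
        self.R = np.eye(2) * r

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with measured centroid z = [x, y]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x  # np.hypot(vx, vy) gives the speed used for gating
```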

Keywords: harbor protection, diver detection, active sonar, wake detection, diver classification

Procedia PDF Downloads 238
52 Diagrid Structural System

Authors: K. Raghu, Sree Harsha

Abstract:

The interrelationship between the technology and architecture of tall buildings is investigated from the emergence of tall buildings in the late 19th century to the present. In the late 19th century, early designs of tall buildings recognized the effectiveness of diagonal bracing members in resisting lateral forces. Most of the structural systems deployed for early tall buildings were steel frames with diagonal bracings of various configurations such as X, K, and eccentric. Through this historical research, a filtering concept of original and remedial technology is developed, through which one can clearly understand the interrelationship between technical evolution and architectural aesthetics, and the stylistic transition of buildings. Diagonalized grid structures, or 'diagrids', have emerged as one of the most innovative and adaptable approaches to structuring buildings in this millennium. Variations of the diagrid system have evolved to the point of making its use non-exclusive to tall buildings; diagrid construction is also found in a range of innovative mid-rise steel projects. Contemporary design practice for tall buildings is reviewed, and design guidelines are provided for new design trends. Investigated in depth are the behavioral characteristics and design methodology of diagrid structures, which have emerged as a new direction in the design of tall buildings with their powerful structural rationale and symbolic architectural expression. Moreover, new technologies for tall building structures and facades are developed for performance enhancement through design integration, and their architectural potential is explored. On this basis, the analysis and design of 40-100 storey diagrid steel buildings is carried out using E-TABS software, with diagrids of various angles considered for the entire building, which helps reduce the steel requirement for the structure. The present project undertakes wind and seismic analysis for the lateral loads acting on the structure due to wind and earthquakes, together with gravity loads. All structural members are designed as per IS 800-2007, considering all load combinations. A comparison of results in terms of time period, top-storey displacement, and inter-storey drift is carried out. Secondary effects such as temperature variations are not considered in the design, assuming small variation.

Keywords: diagrid, bracings, structural, building

Procedia PDF Downloads 387
51 Superordinated Control for Increasing Feed-in Capacity and Improving Power Quality in Low Voltage Distribution Grids

Authors: Markus Meyer, Bastian Maucher, Rolf Witzmann

Abstract:

The ever-increasing amount of distributed generation in low voltage distribution grids (mainly PV and micro-CHP) can lead to reverse load flows from low to medium/high voltage levels at times of high feed-in. Reverse load flow leads to rising voltages that may even exceed the limits specified in the grid codes. Furthermore, the share of electrical loads connected to low voltage distribution grids via switched power supplies continuously increases. In combination with inverter-based feed-in, this results in high harmonic levels reducing overall power quality. Especially high levels of third-order harmonic currents can lead to neutral conductor overload, which is even more critical if lines with reduced neutral conductor cross sections are used. This paper illustrates a possible concept for smart grids to increase feed-in capacity, improve power quality, and ensure safe operation of low voltage distribution grids at all times. The key feature of the concept is a hierarchically structured control strategy that runs on a superordinated controller, which is connected to several distributed grid analyzers and inverters via broadband powerline (BPL). The strategy is devised to ensure both quick response times and the technically and economically reasonable use of the available inverters in the grid (PV inverters, batteries, stepless line voltage regulators). These inverters are provided with standard features for voltage control, e.g. voltage-dependent reactive power control, and can additionally receive reactive power set points transmitted by the superordinated controller. To further improve power quality, the inverters are capable of active harmonic filtering as well as voltage balancing, whereas the latter is primarily done by the stepless line voltage regulators. By additionally connecting the superordinated controller to the control center of the grid operator, supervisory control and data acquisition capabilities for the low voltage distribution grid are enabled, allowing easy monitoring and manual input. Such a low voltage distribution grid can also be used as a virtual power plant.
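
As a small illustration of the voltage-dependent reactive power control mentioned above, the sketch below evaluates a piecewise-linear Q(V) droop characteristic; the break points and the reactive power limit are illustrative assumptions and are not taken from any specific grid code or from the paper.

```python
import numpy as np

def q_of_v(v_pu, v_dead=(0.98, 1.02), v_max=(0.94, 1.06), q_max=0.44):
    """Reactive power set point Q(V) in per unit of rated apparent power:
    zero inside the dead band, ramping linearly to +/- q_max at the
    outer voltage limits (overexcited below, underexcited above)."""
    return np.interp(v_pu,
                     [v_max[0], v_dead[0], v_dead[1], v_max[1]],
                     [q_max, 0.0, 0.0, -q_max])

# Example: at 1.05 p.u. the inverter absorbs reactive power (negative Q),
# counteracting the voltage rise caused by high feed-in.
print(q_of_v(1.05))
```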

Keywords: distributed generation, distribution grid, power quality, smart grid, virtual power plant, voltage control

Procedia PDF Downloads 267
50 Removal of Heavy Metal, Dye and Salinity from Industrial Wastewaters by Banana Rachis Cellulose Micro Crystal-Clay Composite

Authors: Mohd Maniruzzaman, Md. Monjurul Alam, Md. Hafezur Rahaman, Anika Amir Mohona

Abstract:

The consumption of water by various industries is increasing day by day, and so are the wastewaters they produce. These wastewaters contain various dyes, dissolved solids, toxic heavy metals, residual chlorine, and other non-degradable organic materials. If discharged directly into the environment, they are hazardous to the environment and to human health, so it is essential to treat these wastewaters before release. In this research, we demonstrate the successful processing and utilization of a fully bio-based cellulose micro crystal (CMC) composite for the removal of heavy metals, dyes, and salinity from industrial wastewaters. Banana rachis micro-cellulose was prepared by acid hydrolysis (H₂SO₄) of banana (Musa acuminata L.) rachis fiber, and raw Bijoypur clay was treated with the organic solvent triethylamine. Composites were prepared with varying compositions of banana rachis cellulose and modified Bijoypur (north-east Bangladesh) clay. After successful characterization of the cellulose micro crystal (CMC) and the modified clay, the target filter was fabricated with different compositions of cellulose micro crystal and clay in a locally fabricated packed column with a 7.5 cm composite bed thickness. Wastewater containing Basic Yellow 2 as the dye and lead(II) nitrate [Pb(NO₃)₂] and chromium(III) nitrate [Cr(NO₃)₃] as heavy metals was collected from small local textile industries, and saline water was collected from Khulna, to test the efficiency of the banana rachis cellulose micro crystal-clay composite in removing the above impurities. The filtering efficiency was characterized by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), thermogravimetric analysis (TGA), atomic absorption spectrometry (AAS), and scanning electron microscopy (SEM) analyses. All characterization data show highly promising results for the industrial application of the fabricated filter.

Keywords: banana rachis, bio-based filter, cellulose micro crystal-clay composite, wastewaters, synthetic dyes, heavy metal, water salinity

Procedia PDF Downloads 129
49 A PHREEQC Reactive Transport Simulation for Simply Determining Scaling during Desalination

Authors: Andrew Freiburger, Sergi Molins

Abstract:

Freshwater is a vital resource, yet the supply of clean freshwater is diminishing as a consequence of melting snow and ice from global warming, pollution from industry, and increasing demand from human population growth. This unsustainable trajectory of diminishing water resources is projected to jeopardize water security for billions of people in the 21st century. Membrane desalination technologies may resolve the growing discrepancy between supply and demand by filtering arbitrary feed water into a fraction of renewable, clean water and a fraction of highly concentrated brine. The leading hindrance of membrane desalination is fouling, whereby the highly concentrated brine encourages micro-organismal colonization and/or the precipitation of occlusive minerals (i.e. scale) upon the membrane surface. An understanding of brine formation is therefore necessary to mitigate membrane fouling and to develop efficacious desalination technologies that can bolster the supply of available freshwater. This study presents a reactive transport simulation of brine formation and scale deposition during reverse osmosis (RO) desalination. The simulation conceptually represents the RO module as a one-dimensional domain, where feed water enters the domain with a prescribed fluid velocity and is iteratively concentrated in the immobile layer of a dual porosity model. The geochemical code PHREEQC numerically evaluated the conceptual model with parameters for the BW30-400 RO module and for real feed water sources, e.g. the Red and Mediterranean seas and produced waters from American oil wells, based upon peer-reviewed data. The presented simulation is computationally simpler, and hence less resource intensive, than existing, more rigorous simulations of desalination phenomena such as TOUGHREACT. The end user may readily prepare input files and execute simulations on a personal computer with open-source software. The graphical results of fouling potential and brine characteristics may therefore be particularly useful as an initial tool for screening candidate feed water sources and/or informing the selection of an RO module.

Keywords: desalination, PHREEQC, reactive transport, scaling

Procedia PDF Downloads 136
48 The Regulation of Reputational Information in the Sharing Economy

Authors: Emre Bayamlıoğlu

Abstract:

This paper aims to provide an account of the legal and regulatory aspects of algorithmic reputation systems, with a special emphasis on the sharing economy (e.g., Uber, Airbnb, Lyft) business model. The first section starts with an analysis of the legal and commercial nature of the tripartite relationship among the parties, namely the host platform, the individual sharers/service providers, and the consumers/users. The section further examines to what extent an algorithmic system of reputational information could serve as an alternative to legal regulation. Shortcomings are explained and analyzed with specific examples from the Airbnb platform, a pioneering success in the sharing economy. The following section focuses on the governance and control of reputational information. It first analyzes the legal consequences of algorithmic filtering systems designed to detect undesired comments, and how a delicate balance could be struck between competing interests such as freedom of speech, privacy, and the integrity of commercial reputation. The third section deals with the problem of manipulation by users. Indeed, many sharing economy businesses employ techniques of data mining and natural language processing to verify the consistency of feedback. Software agents referred to as 'bots' are employed by users to 'produce' fake reputation values. Such automated techniques are deceptive, with significant negative effects that undermine the trust upon which the reputational system is built. The fourth section explores concerns regarding data mobility, data ownership, and privacy. Reputational information provided by consumers in the form of textual comments may be regarded as writing eligible for copyright protection. Algorithmic reputational systems also contain personal data pertaining to both the individual entrepreneurs and the consumers. The final section starts with an overview of the notion of reputation as a communitarian and collective form of referential trust and then evaluates the above legal arguments from the perspective of the public interest in the integrity of reputational information. The paper concludes with guidelines and design principles for algorithmic reputation systems that address the legal implications raised above.

Keywords: sharing economy, design principles of algorithmic regulation, reputational systems, personal data protection, privacy

Procedia PDF Downloads 466
47 Numerical Simulation on Airflow Structure in the Human Upper Respiratory Tract Model

Authors: Xiuguo Zhao, Xudong Ren, Chen Su, Xinxi Xu, Fu Niu, Lingshuai Meng

Abstract:

Respiratory diseases such as asthma, emphysema, and bronchitis are connected with air pollution, and the number of these diseases tends to increase, which may be attributed to toxic aerosol deposition in the human upper respiratory tract or in the bifurcations of the human lung. The therapy of these diseases mostly uses pharmaceuticals in aerosol form delivered into the upper respiratory tract or the lung. Understanding airflow structures in the human upper respiratory tract plays a very important role in the analysis of the 'filtering' effect in the pharynx/larynx and in obtaining correct air-particle inlet conditions for the lung. Numerical simulation based on CFD (computational fluid dynamics) technology has particular advantages for studying airflow structure in the human upper respiratory tract. In this paper, a representative model of the human upper respiratory tract is built, and CFD is used to investigate the air movement characteristics within it. The airflow movement characteristics, the effect of the airflow on the shear stress distribution, and the probability of wall injury caused by the shear stress are discussed. Experimentally validated computational fluid-aerosol dynamics results showed the following: airflow separation appears near the outer wall of the pharynx and the trachea; a high-velocity zone is created near the inner wall of the trachea; the airflow splits at the flow divider, and a new boundary layer is generated at the inner wall downstream of the bifurcation, with high velocity near the inner wall of the trachea; the maximum velocity appears at the exterior of the boundary layer. The secondary swirls and the axial velocity distribution result in high shear stress acting on the inner walls of the trachea and the bifurcation, finally leading to inner wall injury. Increased breathing intensity increases the shear stress acting on the inner walls of the trachea and the bifurcation. If a person maintains high breathing intensity for a long time, not only does the capacity for gas transport and regulation through the trachea and bifurcation fall, but the probability of wall strain and tissue injury also increases.

Keywords: airflow structure, computational fluid dynamics, human upper respiratory tract, wall shear stress, numerical simulation

Procedia PDF Downloads 247
46 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints on extent in the Fourier domain, as well as approximations and interpolations onto a planar surface, to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground plane projection, with or without terrain as a component, to better view SAR data in an image domain comparable to what a human would see and to ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data is then range compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time-history data for the reflectivity values, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing now allow this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D point cloud size. Backprojection algorithms are embarrassingly parallel, since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing accurate reflectivity representation of a scene. Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing allows SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
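
A naive CPU sketch of the per-voxel backprojection sum is given below; the carrier frequency, range axis, and data shapes are illustrative assumptions. The per-pulse loop body is identical for every voxel, which is exactly the independence that maps each voxel onto its own GPU thread.

```python
import numpy as np

def backproject(range_profiles, pulse_pos, ranges, voxels, f0=1e10):
    """Backproject range-compressed SAR pulses onto arbitrary 3-D voxels.
    range_profiles : (n_pulses, n_bins) complex range-compressed data
    pulse_pos      : (n_pulses, 3) antenna position per pulse
    ranges         : (n_bins,) increasing range axis of the profiles (m)
    voxels         : (n_voxels, 3) reference points (plane, DEM, or cloud)
    """
    c = 299792458.0
    img = np.zeros(len(voxels), dtype=complex)
    for p in range(len(pulse_pos)):
        # range from this pulse's antenna position to every voxel
        r = np.linalg.norm(voxels - pulse_pos[p], axis=1)
        # interpolate the echo at each voxel's range
        echo = np.interp(r, ranges, range_profiles[p].real) \
             + 1j * np.interp(r, ranges, range_profiles[p].imag)
        # coherent sum with two-way phase correction
        img += echo * np.exp(1j * 4 * np.pi * f0 * r / c)
    return np.abs(img)   # per-voxel reflectivity magnitude
```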

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 86
45 ScRNA-Seq RNA Sequencing-Based Program-Polygenic Risk Scores Associated with Pancreatic Cancer Risks in the UK Biobank Cohort

Authors: Yelin Zhao, Xinxiu Li, Martin Smelik, Oleg Sysoev, Firoj Mahmud, Dina Mansour Aly, Mikael Benson

Abstract:

Background: Early diagnosis of pancreatic cancer is clinically challenging due to vague or absent symptoms and a lack of biomarkers. Polygenic risk scores (PRSs) may provide a valuable tool to assess increased or decreased risk of PC. This study aimed to develop such PRSs by filtering genetic variants identified by GWAS using transcriptional programs identified by single-cell RNA sequencing (scRNA-seq). Methods: ScRNA-seq data from 24 pancreatic ductal adenocarcinoma (PDAC) tumor samples and 11 normal pancreases were analyzed to identify differentially expressed genes (DEGs) in tumor and microenvironment cell types compared to healthy tissues. Pathway analysis showed that the DEGs were enriched for hundreds of significant pathways. These were clustered into 40 'programs' based on gene similarity, using the Jaccard index. Published genetic variants associated with PDAC were mapped to each program to generate program PRSs (pPRSs). These pPRSs, along with five previously published PRSs (PGS000083, PGS000725, PGS000663, PGS000159, and PGS002264), were evaluated in a European-origin population from the UK Biobank, consisting of 1,310 PDAC participants and 407,473 non-pancreatic cancer participants. Stepwise Cox regression analysis was performed to determine associations between pPRSs and the development of PC, with adjustment for sex and principal components of genetic ancestry. Results: The PDAC genetic variants were mapped to 23 programs, which were used to generate pPRSs. Four distinct pPRSs (P1, P6, P11, and P16) and two published PRSs (PGS000663 and PGS002264) were significantly associated with an increased risk of developing PC. Among these, P6 exhibited the greatest hazard ratio (adjusted HR [95% CI] = 1.67 [1.14-2.45], p = 0.008). In contrast, P10 and P4 were associated with a lower risk of developing PC (adjusted HR [95% CI] = 0.58 [0.42-0.81], p = 0.001, and adjusted HR [95% CI] = 0.75 [0.59-0.96], p = 0.019, respectively). By comparison, two of the five published PRSs exhibited an association with PDAC onset (PGS000663: adjusted HR [95% CI] = 1.24 [1.14-1.35], p < 0.001; PGS002264: adjusted HR [95% CI] = 1.14 [1.07-1.22], p < 0.001). Conclusion: Compared to published PRSs, scRNA-seq-based pPRSs may be used to assess not only increased but also decreased risk of PDAC.
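
The Cox regression step can be sketched with the lifelines package as below; the data frame, its column names, and the toy values are hypothetical placeholders, since UK Biobank data cannot be shown.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis frame: one row per participant with follow-up time,
# PDAC event indicator, one program PRS (e.g., P6), sex, and a genetic
# principal component; all names and values are illustrative.
df = pd.DataFrame({
    "time_years": [4.2, 10.0, 7.5, 10.0, 3.1, 9.4],
    "pdac_event": [1, 0, 1, 0, 1, 0],
    "pPRS_P6":    [1.8, -0.3, 0.9, -1.1, 1.2, -0.6],
    "sex":        [0, 1, 1, 0, 1, 0],
    "PC1":        [0.01, -0.02, 0.00, 0.03, -0.01, 0.02],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="pdac_event")
cph.print_summary()   # the exp(coef) column gives the adjusted hazard ratio
```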

Keywords: Cox regression, pancreatic cancer, polygenic risk score, scRNA-seq, UK Biobank

Procedia PDF Downloads 101
44 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from fragmented DNA, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that often introduce variability and affect the estimation of microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach, metagenome2vec, that classifies patients into disease groups directly from raw metagenomic reads. The approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which a sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to each genome's influence on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches performance comparable with state-of-the-art methods applied to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
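
A minimal sketch of steps (i) and (ii), assuming a Word2Vec-style skip-gram model from gensim stands in for the embedding learner; k, the hyperparameters, and the toy reads are illustrative choices, not the metagenome2vec configuration.

    import numpy as np
    from gensim.models import Word2Vec

    # Step (i) sketch: overlapping k-mers act as "words" and reads as
    # "sentences"; a read embedding (step ii) is taken here as the mean of
    # its k-mer vectors.
    def kmerize(read, k=6):
        return [read[i:i + k] for i in range(len(read) - k + 1)]

    reads = ["ACGTACGTGACT", "TTGACGTACGTAGG"]  # toy reads, not real fastq data
    sentences = [kmerize(r) for r in reads]
    model = Word2Vec(sentences, vector_size=64, window=5, min_count=1, sg=1)

    read_vec = np.mean([model.wv[km] for km in sentences[0]], axis=0)  # step (ii)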

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 126
43 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving

Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco

Abstract:

Augmented reality promises to be part of future driving: its immersive technology can show directions and maps, identifying important places with graphic elements when the driver requires the information. On the other hand, driving is considered a multitasking activity and, for some people, a complex one in which situations requiring the driver's immediate attention commonly occur and decisions must be made to avoid accidents. The main aim of the project is therefore the instrumentation of a platform with biometric sensors that allows evaluating driving performance under the influence of augmented reality devices and detecting the driver's level of attention, since it is important to know the effect such devices produce. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO, and EMG Myoware are combined in a driving test platform with a Logitech G29 steering wheel and the simulation software City Car Driving, in which the level of traffic and the number of pedestrians in the simulation can be controlled, providing real-time driver interaction, while an MSP430 microcontroller handles data acquisition for storage. The sensors produce a continuous analog signal that requires conditioning: a signal amplifier is incorporated because the acquired signals have a sensitive range of 1.25 mm/mV, and filtering removes unwanted frequency bands so that the signal is interpretable and free of noise before it is converted from analog to digital for analysis of the drivers' physiological signals; these values are stored in a database. Based on this compilation, we extract signal features and implement k-NN (k-nearest neighbor) and decision tree classification methods (supervised learning) that enable the study of the data, the identification of patterns, and the determination, via these classifiers, of the different effects of augmented reality on drivers. The expected results of this project include a test platform instrumented with biometric sensors for data acquisition during driving and a database with the variables required to determine the effect caused by augmented reality on people in simulated driving.
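
A minimal sketch of the filtering stage, assuming a standard band-pass design with SciPy; the 0.5-40 Hz band and 250 Hz sampling rate are typical ECG values chosen for illustration, not the platform's actual settings.

    import numpy as np
    from scipy.signal import butter, filtfilt

    # Conditioning step sketch: band-pass the digitised biosignal to suppress
    # out-of-band noise before feature extraction.
    fs = 250.0                                   # sampling rate in Hz (assumed)
    b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)

    t = np.arange(0.0, 5.0, 1.0 / fs)
    raw = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)  # toy signal
    clean = filtfilt(b, a, raw)                  # zero-phase filtering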

Keywords: augmented reality, driving, physiological signals, test platform

Procedia PDF Downloads 142
42 Effects of Inadequate Domestic Water Supply on Human Health in Selected Neighbourhoods of Lokoja, Kogi State

Authors: Folorunsho J. O., Umar M. A.

Abstract:

Access to potable water supply in both rural and urban regions of the world has been neglected, and this has severely affected people and the aesthetics of the natural environment, further worsening the prevalence of disease. This study considered the effects of inadequate domestic water supply on human health in selected neighbourhoods of Lokoja. The study used descriptive statistics, such as relative frequencies and percentages, and inferential statistics to analyse data obtained through a structured questionnaire. The results revealed that females and males constituted 56% and 44% of the respondents, respectively; 62% of the respondents were married and 32% unmarried; respondents between the ages of 31 and 40 years constituted the majority of the study population, while respondents with tertiary education constituted 35%, and those with secondary education 32%, of the total respondents. Furthermore, civil servants constituted 40% and the unemployed 16% of the total respondents. In terms of monthly income, 40% of the respondents were found to earn between ₦31,000 and ₦40,000 monthly. On the perception of households regarding the availability and adequacy of domestic water supply, the study revealed that 64.7% of the respondents have pipe-borne water as their main source of supply, although only 28.5% of that group have pipe-borne water daily. On the relationship between water supply characteristics and health status among households, the results show that 76% of the respondents perceived a strong relationship between water supply and health status. Cumulatively, 67% of the respondents confirmed that both the quality and the quantity of water supplied play a critical role in determining the health status of residents of the study area. The respondents reported skin diseases (96%), diarrhoea (96%), malaria (91%), cholera (67%), dysentery (67%), and respiratory diseases (67%) as the most perceived and experienced in the area, with disease rates in the prevalence order of malaria (81%), diarrhoea (61%), skin diseases (58%), cholera (34%), dysentery (31%), and respiratory disease (14%). Finally, the results showed how households cope with inadequate water supply: 52% of the respondents confirmed that they regularly treat their water before domestic use, and of these, 35%, 26%, 25%, 10%, and 4%, respectively, adopted boiling, addition of alum, filtering with fabrics, chlorination, and bleaching as their preferred treatment methods. The study thus recommended policy options to aggressively expand potable water supply infrastructure in the study area.

Keywords: potable water, human health, perception, chlorination

Procedia PDF Downloads 69
41 Effect of Varied Climate, Landuse and Human Activities on the Termite (Isoptera: Insecta) Diversity in Three Different Habitats of Shivamogga District, Karnataka, India

Authors: C. M. Kalleshwaraswamy, G. S. Sathisha, A. S. Vidyashree, H. B. Pavithra

Abstract:

Isoptera are an interesting group of social insects with distinct castes and division of labour. They are primarily wood-feeders but also feed on a variety of other organic substrates, such as living trees, leaf litter, soil, lichens, and animal faeces. The number of species and their biomass are especially large in the tropics. In natural ecosystems, they play a beneficial role in nutrient cycles by accelerating decomposition. The magnitude and dimension of the ecological role played by termites is a function of their diversity, population density, and biomass. Termite assemblage composition responds strongly to habitat disturbance and may be indicative of quantitative changes in the decomposition process. Many previous studies in the Western Ghats region of India suggest that increased anthropogenic activities adversely affect soil macrofauna and diversity. Shivamogga district provides a good opportunity to study the effect of topography, cropping pattern, and human disturbance on the termite fauna, thereby acquiring accurate baseline information for conservation decision-making. The district has three distinct agro-ecological areas: maidan, semi-malnad, and the Western Ghats region. Thus, the district provides a unique opportunity to study the effect of varied climate and anthropogenic disturbance on termite diversity. The standard belt-transect protocol developed by Eggleton et al. (1997) was used for sampling termites. Sampling was done at monthly intervals from September 2014 to August 2015 in the Western Ghats, semi-malnad, and maidan habitats. The transect was 100 m long and 2 m wide, divided into 20 contiguous sections of 5 x 2 m in each habitat. Within each section, all probable termite microhabitats were searched, including dead logs, fallen trees, branches, sticks, leaf litter, vegetation, etc. All castes collected were labelled, preserved in 80% alcohol, counted, and identified to species level. The number of encounters of a species in the transect was used as an indicator of its relative abundance. Species diversity, species richness, and density were compared across the three habitats (Western Ghats, semi-malnad, and maidan). The study indicated differences in species composition among the three habitats. A total of 15 species were recorded, belonging to four subfamilies and five genera across the three habitats. Eleven species, viz., Odontotermes obesus, O. feae, O. anamallensis, O. bellahunisensis, O. adampurensis, O. boveni, Microcerotermes fletcheri, M. pakistanicus, Nasutitermes anamalaiensis, N. indicola, and N. krishna, were recorded in the Western Ghats region. Similarly, 11 species, viz., Odontotermes obesus, O. feae, O. anamallensis, O. bellahunisensis, O. hornii, O. bhagwathi, Microtermes obesi, Microcerotermes fletcheri, M. pakistanicus, Nasutitermes indicola, and Pericapritermes sp., were recorded in the semi-malnad habitat. However, only four species, viz., O. obesus, O. feae, Microtermes obesi, and Pericapritermes sp., were recorded in the maidan area. The Shannon-Wiener diversity index (H) showed that the Western Ghats had the highest species diversity (1.56), followed by semi-malnad (1.36), with the lowest value in maidan (0.89). The highest value of Simpson's index (D) was observed in the Western Ghats habitat (0.70), indicating the most diverse assemblage, followed by semi-malnad (0.58) and maidan (0.53). Similarly, evenness was highest (0.65) in the Western Ghats, followed by maidan (0.64), and least in the semi-malnad habitat (0.54). Menhinick's index (Dmn) ranged from 0.03 to 0.06 across the habitats in the study area, with the highest value in the Western Ghats (0.06), followed by semi-malnad (0.05) and maidan (0.03). The study conclusively demonstrated that the Western Ghats had the highest species diversity compared to the semi-malnad and maidan habitats, indicating that the latter two habitats are continuously subjected to anthropogenic disturbance. Efforts are needed to conserve the uncommon species, which may otherwise become extinct due to human activities.
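
The indices reported above follow standard formulas; the short Python sketch below computes them from per-species encounter counts, using toy abundances rather than the study's transect data.

    import numpy as np

    # Standard diversity formulas applied to a vector of per-species counts.
    def diversity_indices(counts):
        counts = np.asarray(counts, dtype=float)
        n = counts.sum()
        p = counts[counts > 0] / n
        shannon = -np.sum(p * np.log(p))        # Shannon-Wiener H'
        simpson = 1.0 - np.sum(p ** 2)          # Simpson's diversity D
        evenness = shannon / np.log(p.size)     # Pielou's evenness J'
        menhinick = p.size / np.sqrt(n)         # Menhinick's richness Dmn
        return shannon, simpson, evenness, menhinick

    print(diversity_indices([30, 12, 8, 5, 3, 2]))  # toy abundances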

Keywords: anthropogenic disturbance, Isoptera, termite species diversity, Western Ghats

Procedia PDF Downloads 270
40 The Usage of Bridge Estimator for Hegy Seasonal Unit Root Tests

Authors: Huseyin Guler, Cigdem Kosar

Abstract:

The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important feature of many economic time series. Some variables contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. Therefore, it is very important to account for seasonality in seasonal macroeconomic data. There are several methods to eliminate the impacts of seasonality in time series. One of them is filtering the data. However, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method to eliminate seasonality is the use of seasonal dummy variables. Some seasonal patterns result from stationary seasonal processes and can be modelled using seasonal dummies, but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it. It is not suitable to use seasonal dummies to model such seasonally non-stationary series; instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Different methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza, Fuller (DHF) and Hylleberg, Engle, Granger, Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary first to choose the lag length and determine any deterministic components (i.e., a constant and trend), and then use the proper model to test for seasonal unit roots. However, this two-step procedure may lead to size distortions and a lack of power in seasonal unit root tests. Recent studies show that Bridge estimators are good at selecting the optimal lag length while differentiating non-stationary from stationary models for non-seasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed to test for seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare its size and power with those of the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in terms of size and power over the HEGY test.
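
A minimal sketch of the quarterly HEGY auxiliary regression, fitted here by ordinary OLS for illustration; the Bridge-penalized estimation and lag selection that the paper proposes are omitted, and the series is synthetic.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Quarterly HEGY regressors: tests on pi1 and pi2 target the zero- and
    # Nyquist-frequency unit roots; a joint test on pi3, pi4 targets the
    # annual frequency pair. Only a constant is included as deterministics.
    def hegy_regressors(y):
        y1 = y + y.shift(1) + y.shift(2) + y.shift(3)     # zero frequency
        y2 = -(y - y.shift(1) + y.shift(2) - y.shift(3))  # Nyquist frequency
        y3 = -(y - y.shift(2))                            # annual frequencies
        d4 = y - y.shift(4)                               # seasonal difference
        X = pd.concat([y1.shift(1), y2.shift(1), y3.shift(2), y3.shift(1)],
                      axis=1)
        X.columns = ["pi1", "pi2", "pi3", "pi4"]
        return d4.rename("d4y"), X

    y = pd.Series(np.random.default_rng(1).standard_normal(200)).cumsum()
    d4, X = hegy_regressors(y)
    data = pd.concat([d4, X], axis=1).dropna()
    res = sm.OLS(data["d4y"],
                 sm.add_constant(data[["pi1", "pi2", "pi3", "pi4"]])).fit()
    print(res.tvalues)  # compare to HEGY critical values, not the normal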

Keywords: bridge estimators, HEGY test, model selection, seasonal unit root

Procedia PDF Downloads 342
39 Delving into the Concept of Social Capital in the Smart City Research

Authors: Atefe Malekkhani, Lee Beattie, Mohsen Mohammadzadeh

Abstract:

The unprecedented growth of megacities and urban areas around the world has resulted in numerous risks, concerns, and problems across various aspects of urban life, spanning environmental, social, and economic domains, such as climate change and spatial and social inequalities. In this situation, the ever-increasing progress of technology has created hope among urban authorities that the negative effects of various socio-economic and environmental crises can potentially be mitigated through information and communication technologies (ICTs). The concept of the 'smart city' represents an emerging solution to urban challenges arising from increased urbanization through the use of ICTs. However, smart cities are often perceived primarily as technological initiatives and are implemented without considering the social and cultural contexts of cities and the needs of their residents. The implementation of smart city projects and initiatives has the potential to (un)intentionally exacerbate pre-existing social, spatial, and cultural segregation. The impact of the smart city on the social capital both of the people who use smart city systems and of those involved in governance as policymakers is therefore worth investigating. The importance of inhabitants to the existence and development of smart cities cannot be overlooked, and this concept has been viewed from different perspectives in smart city studies. Reviewing the literature on social capital and the smart city shows that social capital plays three different roles in smart city development. Some research indicates that social capital is a component of a smart city, embedded in its dimensions, definitions, or strategies; other work sees it as a social outcome of smart city development, suggesting that the move to smart cities improves social capital, although in most cases this remains an unproven hypothesis; and further studies show that social capital can enhance the functions of smart cities and that its consideration in planning smart cities should be promoted. Despite the existing theoretical and practical knowledge, there is a significant research gap in reviewing the knowledge domain of smart city studies through the lens of social capital. To shed light on this issue, this study aims to explore the domain of existing research in the field of smart cities through the lens of social capital. The research will use the 'Preferred Reporting Items for Systematic Reviews and Meta-Analyses' (PRISMA) method to review the relevant literature, focusing on the key concepts of 'Smart City' and 'Social Capital'. Studies will be selected from the Web of Science Core Collection, using a selection process that involves identifying literature sources and screening and filtering studies based on titles, abstracts, and full-text reading.
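
As a rough illustration of the title-and-abstract screening step, the following pandas sketch keeps records mentioning both key concepts; the column names and toy records are assumptions, not the actual Web of Science export format or the study's full protocol.

    import pandas as pd

    # Illustrative PRISMA-style title/abstract screening: retain records that
    # mention both key concepts, as candidates for full-text reading.
    records = pd.DataFrame({
        "title": ["Smart city governance models",
                  "Building social capital through smart city projects"],
        "abstract": ["ICT-driven urban service delivery ...",
                     "How smart city initiatives shape social capital ..."],
    })
    text = (records["title"] + " " + records["abstract"]).str.lower()
    keep = text.str.contains("smart city") & text.str.contains("social capital")
    screened = records[keep]
    print(len(screened), "record(s) pass title/abstract screening")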

Keywords: smart city, urban digitalisation, ICT, social capital

Procedia PDF Downloads 15