Search results for: deep neural image models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11280

8460 An Estimating Equation for Survival Data with Possibly Time-Varying Covariates under Semiparametric Transformation Models

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

The estimating equation technique is an alternative to the widely used maximum likelihood methods and eases some of the complexity introduced by time-varying covariates. When both time-varying covariates and left-truncation are present in the model, maximum likelihood estimation becomes considerably more burdensome and complex. To ease this complexity, this study proposes modified estimating equations, which have received considerable attention from researchers, under a semiparametric transformation model. The purpose of this article is to develop a modified estimating equation under a flexible and general class of semiparametric transformation models for left-truncated and right-censored survival data with time-varying covariates. Besides the commonly applied Cox proportional hazards model, such problems can also be analyzed with a general class of semiparametric transformation models to estimate the effect of treatment, given possibly time-varying covariates, on the survival time. The consistency and asymptotic properties of the estimators were derived via the expectation-maximization (EM) algorithm. The finite-sample behavior of the estimators for the proposed model was illustrated via simulation studies and the Stanford heart transplant data. To sum up, the bias of the covariates was adjusted by estimating the density function of the truncation time variable, and the effect of possibly time-varying covariates was then evaluated in some special semiparametric transformation models.
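
For reference, a minimal sketch of the model class referred to above, written here with time-fixed covariates for simplicity and in standard notation that is assumed rather than taken from the paper: the linear transformation model links an unknown increasing function H of the survival time T to the covariates Z through

H(T) = -\beta^{\top} Z + \varepsilon,

where \beta are the regression parameters and \varepsilon is an error term with a completely specified distribution; choosing the extreme-value distribution for \varepsilon recovers the Cox proportional hazards model, while the standard logistic distribution yields the proportional odds model. Left truncation and right censoring enter through the estimating equations rather than through this structural form.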

Keywords: EM algorithm, estimating equation, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 143
8459 Imaging of Underground Targets with an Improved Back-Projection Algorithm

Authors: Alireza Akbari, Gelareh Babaee Khou

Abstract:

Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of small, shallow subsurface targets such as landmines and unexploded ordnance, and also for through-wall imaging in security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic-aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. Therefore, the hyperbolic signature in the space-time GPR image is usually transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal of a typical GPR image is to display the spatial location and the reflectivity of an underground object. The main challenge of GPR imaging is therefore to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm is limited in the presence of strong noise and produces many artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP algorithm based on cross-correlation between the received signals is proposed to reduce noise and suppress artifacts. To improve the quality of the results, a weight factor was designed for each point in the imaging region. Compared to the standard BP scheme, the improved algorithm produces images of higher quality and resolution. The improved BP algorithm was applied to simulated and real GPR data, and the results showed superior artifact suppression and images of high quality and resolution. In order to quantify the effect of artifact suppression, a focusing parameter was evaluated.
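
The following is a minimal Python sketch of the kind of weighted delay-and-sum back-projection described above; the geometry, variable names, and the coherence-factor weight used here in place of the authors' cross-correlation weight are assumptions for illustration only.

import numpy as np

def back_project(data, antenna_x, t_axis, grid_x, grid_z, v=1.5e8):
    # data: (n_positions, n_samples) A-scans collected along the synthetic aperture
    # v: assumed EM wave speed in the ground (m/s)
    image = np.zeros((len(grid_z), len(grid_x)))
    dt = t_axis[1] - t_axis[0]
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            r = np.hypot(antenna_x - x, z)                  # pixel-to-antenna distances
            tau = 2.0 * r / v                               # two-way travel times
            idx = np.round((tau - t_axis[0]) / dt).astype(int)
            ok = (idx >= 0) & (idx < data.shape[1])
            samples = data[np.where(ok)[0], idx[ok]]
            coherent = np.abs(samples.sum())                # standard BP pixel value
            incoherent = np.abs(samples).sum() + 1e-12
            weight = coherent / incoherent                  # coherence weight in [0, 1]
            image[iz, ix] = weight * coherent               # weighted BP image
    return image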

Keywords: algorithm, back-projection, GPR, remote sensing

Procedia PDF Downloads 433
8458 Computational Model for Predicting Effective siRNA Sequences Using Whole Stacking Energy (ΔG) for Gene Silencing

Authors: Reena Murali, David Peter S.

Abstract:

The small interfering RNA (siRNA) alters the regulatory role of mRNA during gene expression through translational inhibition. Recent studies show that up-regulation of mRNA causes serious diseases such as cancer, so designing effective siRNAs with good knockdown effects plays an important role in gene silencing. Various siRNA design tools have been developed earlier. In this work, we analyze existing well-scoring second-generation siRNA prediction tools and optimize the efficiency of siRNA prediction by designing a computational model using an artificial neural network and whole stacking energy (ΔG), which may help in gene silencing and drug design for cancer therapy. Our model is trained and tested against a large dataset of siRNA sequences. Validation of our results is done by finding the correlation coefficient between experimental and predicted inhibition efficacy of siRNA. We achieved a correlation coefficient of 0.727 in our previous computational model and could improve it to 0.753 when the threshold of the whole stacking energy is greater than or equal to -32.5 kcal/mol.
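
A minimal sketch of the kind of model described above, using synthetic placeholder data; the feature construction, network size, and scoring shown here are assumptions, not the authors' implementation.

import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
X = rng.random((500, 20))          # placeholder sequence/composition features
dG = -30 - 10 * rng.random(500)    # placeholder whole stacking energy (kcal/mol)
y = rng.random(500)                # placeholder measured inhibition efficacy

keep = dG >= -32.5                 # threshold on whole stacking energy, as in the abstract
features = np.column_stack([X[keep], dG[keep]])

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(features, y[keep])
r, _ = pearsonr(y[keep], model.predict(features))   # correlation of measured vs predicted efficacy
print(f"correlation coefficient on this placeholder data: {r:.3f}")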

Keywords: artificial neural network, double stranded RNA, RNA interference, short interfering RNA

Procedia PDF Downloads 514
8457 Analysis of the Contribution of Drude and Brendel Model Terms to the Dielectric Function

Authors: Christopher Mkirema Maghanga, Maurice Mghendi Mwamburi

Abstract:

Parametric modeling provides a means to understand the properties of materials more deeply. The Drude, Brendel, Lorentz, and OJL models incorporated in the SCOUT® software are some of the models used to study dielectric films. In our work, we utilized the Brendel and Drude models to extract the optical constants from spectroscopic data of fabricated undoped and niobium-doped titanium oxide thin films. The individual contributions of the two models were studied to establish how they influence the dielectric function, and the effect of dopants on these contributions was also analyzed. For the undoped films, the results indicate a minimal contribution from the Drude term due to the dielectric nature of the films. However, as the doping level increases, the rise in the concentration of free electrons favors the use of the Drude model. The Brendel model was confirmed to work well with dielectric films, the undoped titanium oxide films in our case.
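
For orientation, the two contributions discussed above are usually written as follows (standard parameterization, assumed here rather than quoted from the paper). The Drude (free-electron) susceptibility is

\chi_{D}(\omega) = -\frac{\Omega_{p}^{2}}{\omega^{2} + i\,\Gamma_{D}\,\omega},

with plasma frequency \Omega_{p} and damping \Gamma_{D}, while the Brendel term is a Lorentz oscillator whose resonance frequency is Gaussian-distributed,

\chi_{B}(\omega) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\omega_{0})^{2}/2\sigma^{2}}\; \frac{\omega_{p}^{2}}{x^{2} - \omega^{2} - i\,\Gamma\,\omega}\, dx,

and the model dielectric function is \varepsilon(\omega) = \varepsilon_{\infty} + \chi_{D}(\omega) + \sum_{j}\chi_{B,j}(\omega). A growing free-carrier concentration with Nb doping increases \Omega_{p}, which is why the Drude term becomes more important for the doped films.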

Keywords: modeling, Brendel model, optical constants, titanium oxide, Drude Model

Procedia PDF Downloads 167
8456 First Occurrence of Histopathological Assessment in the Deep-Sea Gadoid Fish Phycis blennoides from the Southwestern Mediterranean Sea

Authors: Zakia Alioua, Amira Soumia, Zerouali-Khodja Fatiha

Abstract:

Given the wide variety of contaminants, such as heavy metals and organic compounds, and the extent of pollution, the deep sea and its species are not a safe haven and are affected by contaminant exposure. This investigation was performed in order to provide data on the presence of pathological changes in the liver and gonads of the greater forkbeard. A total of 998 specimens of the teleost fish Phycis blennoides Brünnich, 1768, ranging from 5.7 to 62.7 cm in total length, were obtained from the commercial fisheries of Algerian ports. Sampling was carried out monthly from December 2013 to June 2015 and from January to June 2016, the fish being caught by trawlers and longlines between 75 and 600 fathoms off the coast of Algeria. Individuals were sexed; their gonads and livers were removed and processed for light microscopy, and one case of atresia was identified. Overall, 0.002% of the specimens presented some degree of liver steatosis. For the gastric section, the contents of 442 selected stomachs were examined for parasitic infestation, and 212 nematodes were enumerated. A prospective survey for metal contaminants was performed on the liver by atomic absorption spectrophotometry.

Keywords: atresia, coast of Algeria, histopathology, nematode, Phycis blennoides, steatosis

Procedia PDF Downloads 214
8455 Assessment of Sleep Disorders in Moroccan Women with Gynecological Cancer: Cross-Sectional Study

Authors: Amina Aquil, Abdeljalil El Got

Abstract:

Background: Sleep quality is one of the most important indicators of the quality of life of patients suffering from cancer. Many factors can affect sleep quality and can therefore be considered associated predictors. Methods: The aim of this study was to assess the prevalence of sleep disorders and the factors associated with impaired sleep quality in Moroccan women with gynecological cancer. A cross-sectional study was carried out in the oncology department of the Ibn Rochd University Hospital, Casablanca, on Moroccan women who had undergone radical surgery for gynecological cancer (n=100). Translated and validated Arabic versions of the following international scales were used: the Pittsburgh Sleep Quality Index (PSQI), the Hospital Anxiety and Depression Scale (HADS), Rosenberg's Self-Esteem Scale (RSES), and the Body Image Scale (BIS). Results: 78% of participants were considered poor sleepers. Most of the patients exhibited very poor subjective quality, low sleep latency, a short period of sleep, and a low rate of usual sleep efficiency. The vast majority of these patients were in poor shape during the day and did not use sleep medication. Waking up in the middle of the night or early in the morning and getting up to use the bathroom were the main reasons for poor sleep quality. PSQI scores were positively correlated with anxiety, depression, body image dissatisfaction, and lower self-esteem (p < 0.001). Conclusion: Sleep quality and its predictors require systematic evaluation and adequate management to prevent sleep disturbances and mental distress and to improve the quality of life of these patients.

Keywords: body image, gynecological cancer, self-esteem, sleep quality

Procedia PDF Downloads 105
8454 X-Ray Detector Technology Optimization in Computed Tomography

Authors: Aziz Ikhlef

Abstract:

Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of routing runs and connections required by front-illuminated diodes. In back-lit diodes, the electronic noise is already improved because of the reduction in load capacitance obtained from the shorter routing; this translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on X-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper presents an overview of detector technologies and image chain improvements investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in routine examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), optimized for crosstalk, noise, and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 184
8453 Improving Our Understanding of the in vivo Modelling of Psychotic Disorders

Authors: Zsanett Bahor, Cristina Nunes-Fonseca, Gillian L. Currie, Emily S. Sena, Lindsay D.G. Thomson, Malcolm R. Macleod

Abstract:

Psychosis is ranked as the third most disabling medical condition in the world by the World Health Organization. Despite a substantial amount of research in recent years, available treatments are not universally effective and have a wide range of adverse side effects. Since many clinical drug candidates are identified through in vivo modelling, a deeper understanding of these models, and of their strengths and limitations, might help us understand the reasons for difficulties in psychosis drug development. To provide an unbiased summary of the preclinical psychosis literature, we performed a systematic electronic search of PubMed for publications modelling a psychotic disorder in vivo, identifying 14,721 relevant studies. Double screening of 11,000 publications from this dataset has so far established 2,403 animal studies of psychosis, with the most commonly modelled condition being schizophrenia (95%). 61% of these models are induced using pharmacological agents. Across all models, only 56% of publications test a therapeutic treatment. We propose a systematic review of these studies to assess the prevalence of reporting of measures to reduce the risk of bias, and a meta-analysis to assess the internal and external validity of these animal models. Our findings are likely to be relevant to future preclinical studies of psychosis, as this generation of strong empirical evidence has the potential to identify weaknesses and areas for improvement and to make suggestions on refinement of experimental design. Such a detailed understanding of the data which inform what we think we know will help improve the current attrition rate between bench and bedside in psychosis research.

Keywords: animal models, psychosis, systematic review, schizophrenia

Procedia PDF Downloads 275
8452 Adaptive Envelope Protection Control for the below and above Rated Regions of Wind Turbines

Authors: Mustafa Sahin, İlkay Yavrucuk

Abstract:

This paper presents a wind turbine envelope protection control algorithm that protects Variable Speed Variable Pitch (VSVP) wind turbines from damage during operation throughout their below and above rated regions, i.e., from cut-in to cut-out wind speed. The proposed approach uses a neural network that can adapt to turbines and their operating points. An algorithm monitors instantaneous wind and turbine states, predicts a wind speed that would push the turbine to a pre-defined envelope limit and, when necessary, executes an avoidance action. Simulations are carried out using the MS Bladed Wind Turbine Simulation Model for the NREL 5 MW wind turbine equipped with baseline controllers. In all simulations, with the proposed algorithm, the turbine is observed to operate safely within the allowable limit throughout the below and above rated regions. Two example cases, namely adaptation to turbine operating points in the below and above rated regions and the corresponding protection actions, are investigated in simulations to show the capability of the proposed envelope protection system (EPS) algorithm, which reduces excessive wind turbine loads and is expected to increase the turbine service life.

Keywords: adaptive envelope protection control, limit detection and avoidance, neural networks, ultimate load reduction, wind turbine power control

Procedia PDF Downloads 117
8451 A Metallography Study of Secondary A226 Aluminium Alloy Used in Automotive Industries

Authors: Lenka Hurtalová, Eva Tillová, Mária Chalupová, Juraj Belan, Milan Uhríčik

Abstract:

The secondary alloy A226 is used for many automotive castings produced by mould casting and high-pressure die-casting. This alloy has excellent castability, good mechanical properties, and cost-effectiveness. The production of primary aluminium alloys is a heavy source of environmental pollution. The European Union calls for reductions in emissions and energy consumption and therefore for increased production of recycled (secondary) aluminium cast alloys. This contribution deals with the influence of recycling on the quality of castings made from A226 in the automotive industry. The properties of castings made from the secondary aluminium alloy were compared with the required properties of primary aluminium alloys. The effect of recycling on the microstructure was observed using a combination of different analytical techniques: light microscopy after black-white etching, scanning electron microscopy (SEM) after deep etching, and energy-dispersive X-ray analysis (EDX). These techniques were used to identify the various structural parameters, which were then used to compare the secondary alloy microstructure with that of the primary alloy.

Keywords: A226 secondary aluminium alloy, deep etching, mechanical properties, recycling foundry aluminium alloy

Procedia PDF Downloads 523
8450 Transport Emission Inventories and Medical Exposure Modeling: A Missing Link for Urban Health

Authors: Frederik Schulte, Stefan Voß

Abstract:

The adverse effects of air pollution on public health are an increasingly vital problem in planning for urban regions in many parts of the world. The issue is addressed from various angles and by distinct disciplines in research. Epidemiological studies model the relative increase of numerous diseases in response to an increment of different forms of air pollution. A significant share of air pollution in urban regions is related to transport emissions, which are often measured and stored in emission inventories. However, most approaches in transport planning, engineering, and the operational design of transport activities are restricted to general emission limits for specific air pollutants and do not consider more nuanced exposure models. We conduct an extensive literature review on exposure models and emission inventories used to study the health impact of transport emissions. Furthermore, we review methods applied in both domains and use emission inventory data from transportation hubs such as ports, airports, and urban traffic for an in-depth analysis of public health impacts deploying medical exposure models. The results reveal specific urban health risks related to transport emissions that may improve urban planning for environmental health by providing insights into actual health effects instead of referring only to general emission limits.

Keywords: emission inventories, exposure models, transport emissions, urban health

Procedia PDF Downloads 373
8449 Removal of Basic Yellow 28 Dye from Aqueous Solutions Using Plastic Wastes

Authors: Nadjib Dahdouh, Samira Amokrane, Elhadj Mekatel, Djamel Nibou

Abstract:

The removal of Basic Yellow 28 (BY28) from aqueous solutions by PMMA plastic waste was investigated. The characteristics of the waste PMMA were determined by SEM, FTIR, and chemical composition analysis. The effects of solution pH, initial Basic Yellow 28 (BY28) concentration C, solid/liquid ratio R, and temperature T were studied in batch experiments. The Freundlich and Langmuir models were applied to the adsorption process, and it was found that the equilibrium followed the Langmuir adsorption isotherm well. The pseudo-first-order and pseudo-second-order kinetic models were compared for the adsorption of BY28 on the PMMA, and both were found to correlate with the experimental data. The intraparticle diffusion model was also used in these experiments. The thermodynamic parameters, namely the enthalpy ΔH°, entropy ΔS°, and free energy ΔG° of adsorption of BY28 on PMMA, were determined. The negative values of the Gibbs free energy ΔG° indicated the spontaneity of the adsorption of BY28 by PMMA, the negative values of ΔH° revealed the exothermic nature of the process, and the negative values of ΔS° suggest the stability of BY28 on the surface of the waste PMMA.
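
A small illustrative sketch (synthetic numbers, not the paper's data) of how the Langmuir isotherm and the pseudo-second-order kinetic model mentioned above can be fitted, and how a Gibbs free energy follows from an equilibrium constant; all values and the form of the equilibrium constant are assumptions.

import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):          # qe = qmax*KL*Ce / (1 + KL*Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

def pseudo_second_order(t, k2, qe):  # qt = k2*qe^2*t / (1 + k2*qe*t)
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

Ce = np.array([5.0, 10, 20, 40, 80])           # mg/L (synthetic)
qe = np.array([8.0, 13, 18, 22, 24])           # mg/g (synthetic)
(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[25, 0.05])

t  = np.array([5.0, 10, 20, 40, 60, 120])      # min (synthetic)
qt = np.array([6.0, 10, 15, 19, 21, 23])       # mg/g (synthetic)
(k2, qe_fit), _ = curve_fit(pseudo_second_order, t, qt, p0=[0.01, 25])

R, T, K = 8.314, 298.15, 12.0                  # K: dimensionless equilibrium constant (assumed)
dG = -R * T * np.log(K) / 1000.0               # kJ/mol; a negative value indicates spontaneous adsorption
print(qmax, KL, k2, qe_fit, dG)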

Keywords: removal, Waste PMMA, BY28 dye, equilibrium, kinetic study, thermodynamic study

Procedia PDF Downloads 134
8448 Analysis on Prediction Models of TBM Performance and Selection of Optimal Input Parameters

Authors: Hang Lo Lee, Ki Il Song, Hee Hwan Ryu

Abstract:

Accurate prediction of TBM (Tunnel Boring Machine) performance, which is needed for reliable estimation of the construction period and cost at the preconstruction stage, is very difficult. For this purpose, the aim of this study is to analyze the evaluation process of the various prediction models for TBM performance published since 2000 and to select the optimal input parameters for the prediction model. A classification system for TBM performance prediction models and the applied methodology are proposed in this research. The input and output parameters applied in the prediction models are also presented. Based on these results, a statistical analysis is performed using data collected from a shield TBM tunnel in South Korea. By performing a simple regression and residual analysis using the statistical program R, the optimal input parameters are selected. These results are expected to be used for the development of a TBM performance prediction model.
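
The authors report using the statistical program R; the following is only a minimal Python sketch of the same idea, i.e., a simple linear regression of a TBM performance measure against one candidate input parameter followed by a residual check, with placeholder variable names and numbers.

import numpy as np

ucs = np.array([60.0, 80, 100, 120, 150, 180])     # e.g. rock strength (MPa), synthetic
rop = np.array([3.2, 2.9, 2.5, 2.3, 1.9, 1.6])     # e.g. rate of penetration (m/h), synthetic

slope, intercept = np.polyfit(ucs, rop, 1)         # simple linear regression
pred = slope * ucs + intercept
residuals = rop - pred                             # residual analysis: look for structure here

ss_res = np.sum(residuals**2)
ss_tot = np.sum((rop - rop.mean())**2)
r_squared = 1.0 - ss_res / ss_tot                  # goodness of fit for this candidate input
print(slope, intercept, r_squared, residuals)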

Keywords: TBM performance prediction model, classification system, simple regression analysis, residual analysis, optimal input parameters

Procedia PDF Downloads 295
8447 Analyzing and Predicting the CL-20 Detonation Reaction Mechanism Based on Artificial Intelligence Algorithm

Authors: Kaining Zhang, Lang Chen, Danyang Liu, Jianying Lu, Kun Yang, Junying Wu

Abstract:

In order to address the large amount of simulation required and the limited simulation scale of first-principles molecular dynamics simulations of energetic material detonation reactions, we established an artificial intelligence model for analyzing and predicting the detonation reaction mechanism of CL-20, based on first-principles molecular dynamics simulations with the multiscale shock technique (MSST). We employed principal component analysis to identify the dominant charge features governing molecular reactions. We adopted the K-means clustering algorithm to cluster the reaction paths and screen out the key reactions. We introduced a neural network algorithm to construct the mapping relationship between the charge characteristics of the molecular structure and the key reaction characteristics, so as to establish a calculation method for predicting detonation reactions based on the charge characteristics of CL-20 and to realize rapid analysis of the reaction mechanism of energetic materials.
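
A schematic sketch of the three stages named above (PCA on charge features, K-means clustering of reaction paths, and a neural network mapping charge features to key-reaction characteristics), using synthetic arrays and assumed shapes rather than the authors' data or settings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
charge_features = rng.random((200, 30))      # per-molecule charge descriptors (placeholder)
path_features = rng.random((200, 8))         # reaction-path descriptors (placeholder)
key_reaction_targets = rng.random((200, 3))  # key-reaction characteristics (placeholder)

dominant = PCA(n_components=5).fit_transform(charge_features)           # dominant charge features
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(path_features)  # screen key reactions

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
net.fit(dominant, key_reaction_targets)      # charge features -> key reaction characteristics
print(clusters[:10], net.predict(dominant[:2]))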

Keywords: energetic material detonation reaction, first-principle molecular dynamics simulation of multiscale shock technique, neural network, CL-20

Procedia PDF Downloads 91
8446 Digital Watermarking Based on Visual Cryptography and Histogram

Authors: R. Rama Kishore, Sunesh

Abstract:

Nowadays, robust and secure watermarking algorithms and their optimization have become the need of the hour. A watermarking algorithm based on visual cryptography, the histogram shape property, and entropy is presented to achieve copyright protection for the owner. Both the host image and the watermark are preprocessed: the host image with a Butterworth filter and the watermark with visual cryptography. Applying visual cryptography to the watermark generates two shares. One share is used for embedding the watermark, and the other is used for resolving any dispute with the aid of a trusted authority. The use of the histogram shape makes the process more robust against geometric and signal processing attacks. The combination of visual cryptography, the Butterworth filter, the histogram, and entropy makes the algorithm more robust and imperceptible while protecting the owner's copyright.

Keywords: digital watermarking, visual cryptography, histogram, Butterworth filter

Procedia PDF Downloads 343
8445 Device Control Using Brain Computer Interface

Authors: P. Neeraj, Anurag Sharma, Harsukhpreet Singh

Abstract:

In recent years, Brain-Computer Interface (BCI) schemes based on the steady-state visual evoked potential (SSVEP) have earned much consideration. This study attempts to develop an SSVEP-based BCI scheme that can switch a device mock-up between two distinct states, ON and OFF. Two distinct flicker frequencies in the low-frequency range were used to evoke the SSVEPs and were displayed on a Liquid Crystal Display (LCD) screen using LabVIEW. Two stimulus colours, yellow and blue, were used to train the system on SSVEPs. The electroencephalogram (EEG) signals were recorded from the occipital region. Features were extracted using the discrete wavelet transform. A prominent multilayer neural network algorithm (NNA) was used to classify the SSVEP signals. When training the network with different algorithms, the regression plots showed that with the Levenberg-Marquardt training algorithm the accuracy reached 93.9%, which was superior to the other training algorithms.
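
A simplified sketch of the signal-processing chain described above (discrete wavelet features from occipital-channel EEG classified into two stimulation frequencies); the synthetic signals, wavelet choice, and network settings are assumptions, and scikit-learn's optimizer stands in for the Levenberg-Marquardt training used in the study.

import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

fs = 256                                        # sampling rate (Hz), assumed
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(5)

def trial(f_stim):
    # synthetic stand-in for an occipital EEG epoch with an SSVEP at f_stim
    return np.sin(2 * np.pi * f_stim * t) + 0.5 * rng.standard_normal(t.size)

def dwt_features(sig):
    coeffs = pywt.wavedec(sig, "db4", level=4)        # discrete wavelet decomposition
    return np.array([np.sum(c**2) for c in coeffs])   # sub-band energies as features

signals = [trial(7.5) for _ in range(40)] + [trial(12.0) for _ in range(40)]  # two flicker rates
labels = np.array([0] * 40 + [1] * 40)                # 0 = OFF command, 1 = ON command

X = np.array([dwt_features(s) for s in signals])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, labels)
print(f"training accuracy: {clf.score(X, labels):.3f}")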

Keywords: brain computer interface, electroencephalography, steady-state visual evoked potential, wavelet transform, neural network

Procedia PDF Downloads 320
8444 Classifier for Liver Ultrasound Images

Authors: Soumya Sajjan

Abstract:

Liver cancer is among the most common cancers worldwide in men and women and is one of the few cancers still on the rise. Liver disease is the 4th leading cause of death. According to new NHS (National Health Service) figures, deaths from liver diseases have reached record levels, rising by 25% in less than a decade; heavy drinking, obesity, and hepatitis are believed to be behind the rise. In this study, we focus on the development of a diagnostic classifier for ultrasound liver lesions. Ultrasound (US) sonography is an easy-to-use and widely popular imaging modality because of its ability to visualize many human soft tissues and organs without any harmful effect. This paper provides an overview of the underlying concepts, along with algorithms for processing liver ultrasound images. Ultrasound liver lesion images naturally contain considerable speckle noise, so developing a classifier for them is a challenging task. We adopt a fully automatic machine learning approach to develop this classifier. First, we segment the liver image and calculate textural features from the co-occurrence matrix and the run-length method. For classification, a Support Vector Machine (SVM) is used, based on the risk bounds of statistical learning theory. The textural features from the different feature methods are given as input to the SVM individually. Performance analysis on the training and test datasets is carried out separately using the SVM model. Whenever an ultrasonic liver lesion image is given to the SVM classifier system, the features are calculated and the image is classified as a normal or diseased liver lesion. We hope the results will help physicians identify liver cancer non-invasively.
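
A simplified sketch of one part of the pipeline described above: gray-level co-occurrence (GLCM) texture features from an image patch fed to an SVM. The run-length features, speckle handling, segmentation, and real ultrasound data are omitted; all arrays and parameter choices here are assumptions.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(2)
patches = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # stand-in ultrasound patches
labels = rng.integers(0, 2, size=40)                               # 0 = normal, 1 = diseased (placeholder)

X = np.array([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))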

Keywords: segmentation, Support Vector Machine, ultrasound liver lesion, co-occurrence matrix

Procedia PDF Downloads 393
8443 Damage Micromechanisms of Coconut Fibers and Chopped Strand Mats of Coconut Fibers

Authors: Rios A. S., Hild F., Deus E. P., Aimedieu P., Benallal A.

Abstract:

This study examined the damage micromechanisms of chopped strand mats manufactured by compression of Brazilian coconut fiber, and of coconut fibers subjected to different external conditions (chemical treatments). Uniaxial tensile tests were performed together with Digital Image Correlation (DIC). The images captured during the tensile tests of the coconut fibers and coconut fiber mats showed a measurement uncertainty on the order of centipixels. The initial modulus (modulus of elasticity) and tensile strength decreased with increasing diameter for the four conditions of coconut fibers. The DIC showed heterogeneous deformation fields for the coconut fibers and mats, and the displacement fields revealed the rupture process of the coconut fiber. Poisson's ratio of the mat was determined from the transverse and longitudinal deformations measured in the elastic region.

Keywords: coconut fiber, mechanical behavior, digital image correlation, micromechanism

Procedia PDF Downloads 447
8442 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils

Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha

Abstract:

Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have placed a major focus on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely the existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models for each particular soil type for better SWCC estimation. It is expected that better estimation of the SWCC would be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted in order to evaluate the reliability of the SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters for four forms of SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC equations were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type. The Brooks and Corey model predictions are also compatible with samples ranging from low to high soil water content among the samples evaluated in this study.
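
For reference, the Brooks and Corey relation singled out above is commonly written (in standard notation, assumed here rather than quoted from the paper) as

S_e = \frac{\theta - \theta_r}{\theta_s - \theta_r} = \left(\frac{\psi_b}{\psi}\right)^{\lambda} \quad \text{for } \psi > \psi_b, \qquad S_e = 1 \quad \text{for } \psi \le \psi_b,

where S_e is the effective degree of saturation, \theta_r and \theta_s are the residual and saturated volumetric water contents, \psi is the matric suction, \psi_b is the air-entry (bubbling) pressure, and \lambda is the pore-size distribution index; \psi_b, \lambda, and \theta_r are the parameters obtained by the best-fit optimization.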

Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering

Procedia PDF Downloads 325
8441 Data Poisoning Attacks on Federated Learning and Preventive Measures

Authors: Beulah Rani Inbanathan

Abstract:

In the present era, it is evident from numerous incidents that data privacy is being compromised in various ways. Machine learning is one technology that uses a centralized server: data are given as input and analyzed by the algorithms on that server, and outputs are predicted. However, the user must send the data each time so that the algorithm can analyze them to predict the output, which makes the data prone to threats. A solution to this issue is federated learning, where only the models are updated while the data reside on the local machine and are not exchanged with the other local models. Nevertheless, even these local models are subject to data poisoning, as is clear from experiments carried out by many groups. This paper delves into the many ways in which data poisoning occurs and the many settings in which it remains prevalent. It covers poisoning attacks on IoT devices, edge devices, autoregressive models, and industrial IoT systems, as well as a few points on how these attacks could be evaded in order to protect data that is personal, sensitive, or harmful when exposed.
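
A toy sketch of the core issue discussed above: how a label-flipping data-poisoning attack by one client can degrade a model trained by federated averaging. This is a generic numpy illustration with a logistic-regression model, not any of the IoT, edge, autoregressive, or industrial IoT settings surveyed in the paper.

import numpy as np

rng = np.random.default_rng(6)

def make_client(n, flip=False):
    X = rng.standard_normal((n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    if flip:
        y = 1.0 - y                      # poisoned client: all labels flipped
    return X, y

def local_update(w, X, y, lr=0.1, steps=50):
    for _ in range(steps):               # plain gradient descent on the logistic loss
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

clients = [make_client(200), make_client(200, flip=True)]   # one honest, one poisoned client
w_global = np.zeros(2)
for _ in range(10):                      # federated averaging rounds
    w_global = np.mean([local_update(w_global.copy(), X, y) for X, y in clients], axis=0)

X_test, y_test = make_client(1000)
acc = np.mean((1 / (1 + np.exp(-X_test @ w_global)) > 0.5) == y_test)
print(f"global-model accuracy with one poisoned client: {acc:.3f}")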

Keywords: data poisoning, federated learning, Internet of Things, edge computing

Procedia PDF Downloads 73
8440 Time Organization for Decongesting Urban Mobility: New Methodology Identifying People's Behavior

Authors: Yassamina Berkane, Leila Kloul, Yoann Demoli

Abstract:

Quality of life, environmental impact, and the congestion of mobility means and infrastructures remain significant challenges for urban mobility. Solutions like car sharing, spatial redesign, eCommerce, and autonomous vehicles will likely affect the unit veh-km and the density of cars in urban traffic, and thus congestion; the impact of such solutions is, however, not yet clear to researchers. Congestion arises from growing populations that must travel greater distances to arrive at similar locations (e.g., workplaces, schools) during the same time frame (e.g., rush hours). This paper first reviews the research and application cases of urban decongestion methods from recent years. Rethinking the question of time, it then investigates people's willingness and flexibility to adapt their arrival at and departure times from workplaces. We use neural networks and supervised learning methods in a new methodology for predicting people's intentions from their responses to a questionnaire. We created and distributed a questionnaire to more than 50 companies in the Paris suburbs. The results obtained illustrate that our methodology can predict people's intentions to reschedule their activities (work, study, commerce, etc.).

Keywords: urban mobility, decongestion, machine learning, neural network

Procedia PDF Downloads 177
8439 Retina Registration for Biometrics Based on Characterization of Retinal Feature Points

Authors: Nougrara Zineb

Abstract:

The unique structure of the blood vessels in the retina has been used for biometric identification. The retinal blood vessel pattern is unique to each individual, and it is almost impossible to forge that pattern in a false individual. The advantages of retina biometrics include the high distinctiveness, universality, and stability over time of the blood vessel pattern. Once the vessel creases have been extracted from the images, a registration stage is necessary, since the position of the retinal vessel structure can change between acquisitions due to movements of the eye. Image registration consists of the following steps: feature detection, feature matching, transform model estimation, and image resampling and transformation. In this paper, we present a registration algorithm based on the characterization of retinal feature points. For the experiments, retinal images from the DRIVE database were tested. The proposed methodology achieves good results for registration in general.
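
An illustrative sketch of the four generic registration steps listed above (feature detection, feature matching, transform model estimation, and resampling), using ORB features and a partial-affine model in OpenCV; the synthetic stand-in image, feature choice, and transform model are assumptions, not the paper's algorithm.

import cv2
import numpy as np

rng = np.random.default_rng(3)
fixed = (rng.random((256, 256)) * 255).astype(np.uint8)   # stand-in for a vessel-rich retinal image
M_true = np.float32([[1, 0, 6], [0, 1, -4]])              # small known shift between acquisitions
moving = cv2.warpAffine(fixed, M_true, (256, 256))

orb = cv2.ORB_create(1000)                                # 1. feature detection and description
kp1, des1 = orb.detectAndCompute(fixed, None)
kp2, des2 = orb.detectAndCompute(moving, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)          # 2. feature matching
matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:100]

src = np.float32([kp2[m.queryIdx].pt for m in matches])
dst = np.float32([kp1[m.trainIdx].pt for m in matches])
A, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)  # 3. transform model estimation

registered = cv2.warpAffine(moving, A, fixed.shape[::-1])            # 4. resampling and transformation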

Keywords: fovea, optic disc, registration, retinal images

Procedia PDF Downloads 253
8438 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour

Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale

Abstract:

Mumbai being traditionally the epicentre of India's trade and commerce, the existing major ports situated in the Thane estuary, such as Mumbai Port and Jawaharlal Nehru Port (JN), are also developing their waterfront facilities. Various developments in this region over the past decades have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water owing to the advancement of the shoreline, while the jetty near Ulwe faces ship-scheduling problems due to the shallower depths between JN Port and Ulwe Bunder. Solving these problems requires information about tide levels over a long duration, normally obtained by field measurements. However, field measurement is a tedious and costly affair; therefore, artificial intelligence was applied to predict water levels by training a network on the tide data measured over one lunar tidal cycle. A two-layer feed-forward Artificial Neural Network (ANN) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi over one lunar tidal cycle (2013) were used to train, validate, and test the neural networks. These trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tides for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give reasonably accurate estimates of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using the neural network trained with the measured tide data (2000) of Apollo and Pir-Pau. The analysis of the measured data and the study reveal the following: the measured tidal data at Pir-Pau, Vashi, and Ulwe indicate a maximum amplification of the tide of about 10-20 cm with a phase lag of 10-20 minutes with reference to the tide at Apollo Bunder (Mumbai); the LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer; and the tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels for planning the pumping operation at Pir-Pau and improving the ship schedule at Ulwe.
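
A conceptual sketch of the prediction step described above: a small feed-forward network trained on a synthetic tide-like signal at a reference station to reproduce an amplified and lagged series at a second station. The feature choice and numbers are assumptions, and scikit-learn's 'lbfgs' solver stands in for the Levenberg-Marquardt training used in the study.

import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.arange(0, 29.5 * 24, 0.5)                            # half-hourly samples over one lunar cycle (h)
apollo = 2.0 * np.sin(2 * np.pi * t / 12.42)                # synthetic tide at the reference station (m)
ulwe = 1.15 * 2.0 * np.sin(2 * np.pi * (t - 0.25) / 12.42)  # amplified and lagged target station (m)

X = np.column_stack([apollo, np.roll(apollo, 1), np.roll(apollo, 2)])  # level plus a short history
net = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X[2:], ulwe[2:])

r = np.corrcoef(ulwe[2:], net.predict(X[2:]))[0, 1]
print(f"correlation coefficient R on the training cycle: {r:.3f}")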

Keywords: artificial neural network, back-propagation, tide data, training algorithm

Procedia PDF Downloads 465
8437 Lean Impact Analysis Assessment Models: Development of a Lean Measurement Structural Model

Authors: Catherine Maware, Olufemi Adetunji

Abstract:

The paper is aimed at developing a model to measure the impact of Lean manufacturing deployment on organizational performance. The model will help industry practitioners assess the impact of implementing Lean constructs on organizational performance. It will also harmonize the measurement models of Lean performance with the house of Lean, which seems to have become the industry standard. The sheer number of measurement models for impact assessment of Lean implementation makes it difficult for new adopters to select an appropriate assessment model or deployment methodology. A literature review is conducted to classify the Lean performance models. Pareto analysis is used to select the Lean constructs for the development of the model. The model is further formalized through the use of Structural Equation Modeling (SEM) in defining the underlying latent structure of a Lean system. The impact assessment measurement model developed can be used to measure Lean performance and can be adopted by different industries.

Keywords: impact measurement model, lean bundles, lean manufacturing, organizational performance

Procedia PDF Downloads 465
8436 Improved Anatomy Teaching by the 3D Slicer Platform

Authors: Ahmedou Moulaye Idriss, Yahya Tfeil

Abstract:

Medical imaging technology has become an indispensable tool in many branches of biomedicine, healthcare, and research, and it is vitally important for the training of professionals in these fields. It is not only about the tools, technologies, and knowledge provided but also about the community that this training project proposes. In order to raise the level of anatomy teaching in the medical school of Nouakchott in Mauritania, it is necessary and even urgent to facilitate access to modern technology for African countries. The role of technology as a key driver of sustainable development has long been recognized. Anatomy is an essential discipline for the training of medical students and a key element in the training of medical specialists. The quality and results of the work of a young surgeon depend on a sound knowledge of anatomical structures. The teaching of anatomy is difficult, as the discipline is being neglected by medical students in many academic institutions. However, anatomy remains a vital part of any medical education program. When anatomy is presented in various planes, medical students report difficulties in understanding; they do not develop the ability to visualize and mentally manipulate 3D structures and are sometimes unable to correctly identify neighbouring or associated structures. This is the case, for example, when they have to identify structures related to the caudate lobe when the liver is moved to different positions. In recent decades, modern educational tools using digital sources have tended to replace the old methods, one of the main reasons being the lack of cadavers in laboratories with poorly qualified staff. The emergence of increasingly sophisticated mathematical models, image processing, and visualization tools in biomedical imaging research has enabled sophisticated three-dimensional (3D) representations of anatomical structures. In this paper, we report our current experience at the Faculty of Medicine in Nouakchott, Mauritania. One of our main aims is to create a local learning community in the field of anatomy. The main technological platform used in this project is called 3D Slicer, a free, open-source application for viewing, analyzing, and interacting with biomedical imaging data. Using the 3D Slicer platform, we created anatomical atlases of parts of the human body, including the head, thorax, abdomen, liver, pelvis, and upper and lower limbs, from real medical images. Data were collected from several local hospitals and also from the web; we used MRI and CT scan imaging data from children and adults. Many different anatomy atlases exist, both in print and in digital form. An anatomy atlas displays three-dimensional anatomical models, image cross-sections of labelled structures and the source radiological imaging, and a text-based hierarchy of structures. Open and free online anatomical atlases developed by our anatomy laboratory team will be available to our students. This will allow pedagogical autonomy and remedy the current shortcomings by responding more fully to the objectives of sustainable local development of quality education and good health at the national level. To make this work a reality, our team produced several atlases available in our faculty in the form of research projects.

Keywords: anatomy, education, medical imaging, three dimensional

Procedia PDF Downloads 220
8435 Spatial Time Series Models for Rice and Cassava Yields Based on Bayesian Linear Mixed Models

Authors: Panudet Saengseedam, Nanthachai Kantanantha

Abstract:

This paper proposes a linear mixed model (LMM) with spatial effects to forecast rice and cassava yields in Thailand simultaneously. A multivariate conditional autoregressive (MCAR) model is assumed to represent the spatial effects. A Bayesian method is used for parameter estimation via Gibbs sampling Markov chain Monte Carlo (MCMC). The model is applied to the monthly rice and cassava yield data extracted from the Office of Agricultural Economics, Ministry of Agriculture and Cooperatives of Thailand. The results show that the proposed model performs better in most provinces, in both the fitting part and the validation part, than the simple exponential smoothing and conditional autoregressive (CAR) models from our previous study.
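
As a rough sketch of the model structure described above (notation assumed here, not quoted from the paper), the yield of crop c in province i at month t can be written as

y_{cit} = \mathbf{x}_{cit}^{\top}\boldsymbol{\beta}_{c} + \phi_{ci} + \varepsilon_{cit}, \qquad \varepsilon_{cit} \sim N(0, \sigma_c^{2}),

where the vector of province-level spatial random effects for the two crops, \boldsymbol{\phi}_i = (\phi_{1i}, \phi_{2i})^{\top}, is given a multivariate conditional autoregressive (MCAR) prior so that the effects are smoothed over neighbouring provinces and correlated between crops; all parameters are then sampled from their full conditional distributions by Gibbs sampling.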

Keywords: Bayesian method, linear mixed model, multivariate conditional autoregressive model, spatial time series

Procedia PDF Downloads 381
8434 Investigation of New Gait Representations for Improving Gait Recognition

Authors: Chirawat Wattanapanich, Hong Wei

Abstract:

This study presents new gait representations for improving gait recognition accuracy across gait appearances, such as normal walking, wearing a coat, and carrying a bag. Based on the Gait Energy Image (GEI), two ideas are implemented to generate new gait representations: one is to append the lower-knee region to the original GEI, and the other is to apply convolutional operations to the GEI and its variants. A set of new gait representations is created and used for training multi-class Support Vector Machines (SVMs). Tests are conducted on CASIA dataset B. Various combinations of the gait representations with different convolutional kernel sizes and different numbers of kernels used in the convolutional processes are examined. Both the entire images used as features and dimension-reduced features obtained by Principal Component Analysis (PCA) are tested in gait recognition. Interestingly, both new techniques, appending the lower-knee region to the original GEI and the convolutional GEI, contribute significantly to the performance improvement in gait recognition. The experimental results show that the average recognition rate can be improved from 75.65% to 87.50%.
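
A rough sketch of the representations described above (synthetic silhouettes and an assumed region split, not the study's code): build a Gait Energy Image, apply a convolution kernel to it, append a lower-knee band, and feed PCA-reduced features to a multi-class linear SVM.

import numpy as np
from scipy.ndimage import convolve
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def gait_energy_image(silhouettes):
    # silhouettes: (n_frames, H, W) binary images of one gait cycle
    return silhouettes.mean(axis=0)

def gei_variants(gei):
    kernel = np.ones((3, 3)) / 9.0                     # assumed 3x3 averaging kernel
    conv_gei = convolve(gei, kernel, mode="nearest")   # convolutional GEI
    lower_knee = gei[int(0.75 * gei.shape[0]):, :]     # assumed lower-knee band
    return np.hstack([conv_gei.ravel(), lower_knee.ravel()])

X = np.array([gei_variants(gait_energy_image(rng.integers(0, 2, (30, 64, 44)).astype(float)))
              for _ in range(60)])                     # 60 synthetic gait sequences
y = np.repeat(np.arange(6), 10)                        # 6 subjects, 10 sequences each (placeholder labels)

feats = PCA(n_components=20).fit_transform(X)          # dimension reduction
clf = SVC(kernel="linear").fit(feats, y)               # multi-class SVM (one-vs-one internally)
print(clf.score(feats, y))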

Keywords: convolutional image, lower knee, gait

Procedia PDF Downloads 189
8433 Oryzanol Recovery from Rice Bran Oil: Adsorption Equilibrium Models Through Kinetics Data Approachments

Authors: A.D. Susanti, W. B. Sediawan, S.K. Wirawan, Budhijanto, Ritmaleni

Abstract:

Oryzanol, naturally present in rice bran oil (RBO), has high antioxidant activity. It has been reported to have several health benefits and is of high interest in pharmacy, cosmetics, and nutrition. Because of the low concentration of oryzanol in crude RBO (0.9-2.9%), it needs further processing for practical use, for example via an adsorption process. In this study, adsorption equilibrium models were investigated and adjusted through a kinetic-data approach. A mathematical model of the kinetics of batch adsorption for oryzanol separation from RBO was set up and then applied to the equilibrium results. The adsorbent particles used in this case are relatively small, so the concentration within the adsorbent is assumed to be uniform. Hence, the adsorption rate is controlled by the rate of oryzanol mass transfer from the bulk RBO fluid to the surface of the silica gel. In this approach, the rate of mass transfer is assumed to be proportional to the deviation of the concentration from its equilibrium value. The equilibrium models applied were the Langmuir, distribution coefficient, and Freundlich models, with the parameter values obtained from the equilibrium results. It turned out that the models set up can quantitatively describe the experimental kinetic data, and adjustment of the equilibrium isotherm parameter values significantly improves the accuracy of the model. The value of the mass transfer coefficient per unit adsorbent mass (kca) is then obtained by curve fitting.
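
One common way to write the lumped kinetic model described above (notation assumed here, not quoted from the paper) is

\frac{dq}{dt} = k_{c}a\,\bigl(C - C^{*}(q)\bigr), \qquad V\frac{dC}{dt} = -m\,\frac{dq}{dt},

where q is the oryzanol loading on the silica gel, C the oryzanol concentration in the bulk RBO, V the liquid volume, m the adsorbent mass, k_{c}a the mass transfer coefficient per unit adsorbent mass obtained by curve fitting, and C^{*}(q) the bulk concentration that would be in equilibrium with the current loading; for the Langmuir isotherm q = q_{max}KC^{*}/(1 + KC^{*}), this gives C^{*}(q) = q/\bigl(K(q_{max} - q)\bigr), with analogous expressions for the distribution-coefficient and Freundlich isotherms.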

Keywords: adsorption equilibrium, adsorption kinetics, oryzanol, rice bran oil

Procedia PDF Downloads 306
8432 Bacterial Community Diversity in Soil under Two Tillage Systems

Authors: Dalia Ambrazaitienė, Monika Vilkienė, Danute Karcauskienė, Gintaras Siaudinis

Abstract:

The soil is a complex ecosystem that is part of our biosphere, and the ability of soil to provide ecosystem services is dependent on microbial diversity. Tillage is one of the major factors that affect soil properties. No-till systems and shallow ploughless tillage are the opposite of traditional deep ploughing: no-tillage systems, for instance, increase soil organic matter by reducing mineralization rates and raising litter concentrations in the topsoil layer, whereas deep ploughing increases the biological activity of the arable soil layer and reduces the incidence of weeds. The role of soil organisms is central to soil processes. Although the number of microbial species in soil is still being debated, the metagenomic approach to estimating microbial diversity predicted about 2,000-18,000 bacterial genomes in 1 g of soil. Despite the key role of bacteria in soil processes, there is still a lack of information about the bacterial diversity of soils as affected by tillage practices. This study focused on a metagenomic analysis of bacterial diversity in long-term experimental plots of Dystric Epihypogleyic Albeluvisols in the western part of Lithuania. The experiment was set up in 2013 and had a split-plot design in which the whole-plot treatments were laid out in a randomized design with three replicates. The whole-plot treatments consisted of two tillage methods: deep ploughing (22-25 cm) (DP) and ploughless tillage (7-10 cm) (PT). Three subsamples (0-20 cm) were collected on October 22, 2015, for each of the three replicates. Subsamples from the DP and PT systems were pooled treatment-wise to make two composite samples, one representing deep ploughing (DP) and the other ploughless tillage (PT). Genomic DNA was extracted from approximately 200 mg of field-moist soil using the D6005 Fungal/Bacterial Miniprep set (Zymo Research®) following the manufacturer's instructions. To determine bacterial diversity and community composition, we employed a culture-independent approach of high-throughput sequencing of the 16S rRNA gene. Metagenomic sequencing was performed on the Illumina MiSeq platform at the BaseClear company. The microbial component of soil plays a crucial role in the cycling of nutrients in the biosphere. Our study was a preliminary attempt at observing bacterial diversity in soil under two common but contrasting tillage practices. The number of sequenced reads obtained for PT (161 917) was higher than for DP (131 194). The 10 most abundant genera in the soil samples were the same (Arthrobacter, Candidatus Saccharibacteria, Actinobacteria, Acidobacterium, Mycobacterium, Bacillus, Alphaproteobacteria, Longilinea, Gemmatimonas, Solirubrobacter); only their shares of the community differed. In DP, Arthrobacter and Acidobacterium constituted 8.4% and 2.5% of the whole community, respectively, whereas in PT they constituted just 5.8% and 2.1%. Nocardioides and Terrabacter were observed only in PT. This work was supported by the project VP1-3.1-ŠMM-01-V-03-001 NKPDOKT and the National Science Program: The effect of long-term, different-intensity management of resources on the soils of different genesis and on other components of the agro-ecosystems [grant number SIT-9/2015], funded by the Research Council of Lithuania.

Keywords: deep ploughing, metagenomics, ploughless tillage, soil community analysis

Procedia PDF Downloads 229
8431 Vibration of a Beam on an Elastic Foundation Using the Variational Iteration Method

Authors: Desmond Adair, Kairat Ismailov, Martin Jaeger

Abstract:

Modelling of Timoshenko beams on elastic foundations has been widely used in the analysis of buildings, geotechnical problems, and railway and aerospace structures. For the elastic foundation, the most widely used models are one-parameter mechanical models or two-parameter models that include the continuity and cohesion of typical foundations, with the two-parameter model usually considered the better of the two. Knowledge of the free vibration characteristics of beams on an elastic foundation is considered necessary for optimal design solutions in many engineering applications, and in this work the efficient and accurate variational iteration method is developed and used to calculate the natural frequencies of a Timoshenko beam on a two-parameter foundation. The variational iteration method is a technique capable of dealing with some linear and non-linear problems in an easy and efficient way. The calculations are compared with those using a finite-element method and other analytical solutions, and it is shown that the results are accurate and are obtained efficiently. It is found that the effect of the presence of the two-parameter foundation is to increase the beam's natural frequencies, and this is thought to be because of the shear-layer stiffness, which has an effect on the elastic stiffness. By setting the two-parameter model's stiffness parameter to zero, it is possible to obtain a one-parameter foundation model, and so a comparison between the two foundation models is also made.
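
As a generic illustration of the method named above (not the paper's specific beam equations), the variational iteration method constructs successive approximations from a correction functional of the form

u_{n+1}(x) = u_{n}(x) + \int_{0}^{x} \lambda(s)\,\bigl[L\,u_{n}(s) + N\,\tilde{u}_{n}(s) - g(s)\bigr]\,ds,

where L and N are the linear and nonlinear parts of the governing equation, g is the forcing term, \tilde{u}_{n} denotes a restricted variation, and the general Lagrange multiplier \lambda(s) is identified from the stationarity condition of the functional; for a free-vibration problem the iteration would be applied to the coupled deflection and rotation equations of the Timoshenko beam, with the natural frequencies following from the boundary conditions imposed on the converged approximation.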

Keywords: Timoshenko beam, variational iteration method, two-parameter elastic foundation model

Procedia PDF Downloads 179