Search results for: scattering operator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 802


262 Surface Characterization and Femtosecond-Nanosecond Transient Absorption Dynamics of Bioconjugated Gold Nanoparticles: Insight into the Warfarin Drug-Binding Site of Human Serum Albumin

Authors: Osama K. Abou-Zied, Saba A. Sulaiman

Abstract:

We studied the spectroscopy of 25-nm diameter gold nanoparticles (AuNPs), coated with human serum albumin (HSA) as a model drug carrier. The morphology and coating of the AuNPs were examined using transmission electron microscopy and dynamic light scattering. Resonance energy transfer from the sole tryptophan of HSA (Trp214) to the AuNPs was observed in which the fluorescence quenching of Trp214 is dominated by a static mechanism. Using fluorescein (FL) to probe the warfarin drug-binding site in HSA revealed the unchanged nature of the binding cavity on the surface of the AuNPs, indicating the stability of the protein structure on the metal surface. The transient absorption results of the surface plasmonic resonance (SPR) band of the AuNPs show three ultrafast dynamics that are involved in the relaxation process after excitation at 460 nm. The three decay components were assigned to the electron-electron (~ 400 fs), electron-phonon (~ 2.0 ps) and phonon-phonon (200–250 ps) interactions. These dynamics were not changed upon coating the AuNPs with HSA which indicates the chemical and physical stability of the AuNPs upon bioconjugation. Binding of FL in HSA did not have any measurable effect on the bleach recovery dynamics of the SPR band, although both FL and AuNPs were excited at 460 nm. The current study is important for a better understanding of the physical and dynamical properties of protein-coated metal nanoparticles which are expected to help in optimizing their properties for critical applications in nanomedicine.

Keywords: gold nanoparticles, human serum albumin, fluorescein, femtosecond transient absorption

Procedia PDF Downloads 332
261 Multi-Objective Optimization in Carbon Abatement Technology Cycles (CAT) and Related Areas: Survey, Developments and Prospects

Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emanuele

Abstract:

Even a small increase in performance can yield a substantial reduction in the operating and capital expenses of a power generation system. Therefore, studies are continually carried out to improve both conventional and novel power cycles. Globally, power producers are constantly researching ways to minimize emissions and to reduce the total cost rate of power plants. A wide range of low-carbon cycle technologies has been suggested and studied; however, they all have their limitations and financial implications. In the area of carbon abatement in power plants, three major objectives conflict: the cost rate of the plant, the power output, and the environmental impact, since an improvement in one of these parameters directly affects the others. This poses a multi-objective problem, and it is paramount to be able to discern the point at which improving one objective begins to degrade another; hence the need for a Pareto-based optimization algorithm. A Pareto-based optimization algorithm identifies the set of solutions at which no objective can be improved without worsening another, and its application helps the user/operator/designer make an informed decision. This paper reviews the areas in which multi-objective optimization has been applied to carbon abatement technologies over the last five years, together with developments and prospects.
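
To make the Pareto idea concrete, the following minimal sketch identifies the non-dominated set among a few hypothetical plant designs evaluated on the three conflicting objectives named in the abstract (cost rate, power output, environmental impact). The candidate values and names are illustrative assumptions, not data from the surveyed studies.

```python
# Minimal sketch (hypothetical data): identify the Pareto-optimal set among candidate
# plant designs evaluated on three conflicting objectives. Cost rate and environmental
# impact are minimized, power output is maximized.

def dominates(a, b):
    """True if design a is at least as good as b on every objective and strictly
    better on at least one (cost, impact: lower is better; power: higher is better)."""
    no_worse = a["cost"] <= b["cost"] and a["impact"] <= b["impact"] and a["power"] >= b["power"]
    strictly_better = a["cost"] < b["cost"] or a["impact"] < b["impact"] or a["power"] > b["power"]
    return no_worse and strictly_better

def pareto_front(designs):
    """Return the non-dominated (Pareto-optimal) designs."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

if __name__ == "__main__":
    candidates = [
        {"name": "A", "cost": 1.00, "power": 400, "impact": 0.90},
        {"name": "B", "cost": 1.15, "power": 430, "impact": 0.70},
        {"name": "C", "cost": 1.20, "power": 410, "impact": 0.95},  # dominated by B
        {"name": "D", "cost": 0.95, "power": 380, "impact": 1.00},
    ]
    for d in pareto_front(candidates):
        print(d["name"], d)
```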

Keywords: gas turbine, low carbon technology, pareto optimal, multi-objective optimization

Procedia PDF Downloads 791
260 IoT Based Agriculture Monitoring Framework for Sustainable Rice Production

Authors: Armanul Hoque Shaon, Md Baizid Mahmud, Askander Nobi, Md. Raju Ahmed, Md. Jiabul Hoque

Abstract:

In the Internet of Things (IoT), devices are linked to the internet through a wireless network, allowing them to collect and transmit data without the need for a human operator. Agriculture relies heavily on wireless sensors, which are a vital component of the IoT. This kind of wireless sensor network monitors physical or environmental variables such as temperature, sound, vibration, pressure, or motion without relying on a central location or sink, and collaboratively passes its data across the network to be analyzed. As the primary source of plant nutrients, the soil is critical to the agricultural industry's continued growth, which motivates the IoT solution developed here. To organize the network, the sink node collects groundwater-level readings and sends them to the gateway, which centralizes the data gathered from the sensor nodes. The sink node also gathers soil moisture data and transmits the mean value to the gateway, which forwards it to the website for dissemination. The web server is in charge of storing the soil moisture data and presenting it to the web application's users. Soil characteristics can thus be collected using the networked method we developed to improve rice production. Paddy land is running out as the population of our nation grows, so the success of this project will depend on the appropriate use of the existing land base.

Keywords: IoT based agriculture monitoring, intelligent irrigation, communicating network, rice production

Procedia PDF Downloads 153
259 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data

Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer

Abstract:

This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) process are derived iteratively from some initial stationary moments. Regarding the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is proposed in which BINARMA(1,1) count data are generated, with the innovation terms simulated from a multivariate Poisson distribution in R. The performance of the BINARMA(1,1) model is then assessed through this simulation experiment, and the mean estimates of the model parameters are found to be efficient, based on their standard errors. The proposed model is then used to analyse a real-life dataset of accidents on the motorway in Mauritius, based on some covariates: policemen, daily patrols, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
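
To illustrate the model structure, the sketch below simulates a stationary special case of a bivariate INARMA(1,1)-type process built from the binomial thinning operator, with cross-correlation induced only through correlated Poisson innovations (a common-shock construction). It is not the authors' R code, and all parameter values are illustrative assumptions rather than estimates from the paper.

```python
# Bivariate thinning-based count process sketch: X_t[k] = alpha_k ∘ X_{t-1}[k]
# + beta_k ∘ R_{t-1}[k] + R_t[k], with ∘ the binomial thinning operator and the
# innovations R_t[1], R_t[2] correlated through a shared Poisson shock.
import numpy as np

rng = np.random.default_rng(0)

def thin(x, p):
    """Binomial thinning p ∘ x: each of the x counts survives with probability p."""
    return rng.binomial(x, p)

def simulate_binarma(n, alpha=(0.4, 0.3), beta=(0.2, 0.25), lam=(1.0, 1.5), lam_common=0.5):
    X = np.zeros((n, 2), dtype=int)
    R_prev = np.zeros(2, dtype=int)
    for t in range(1, n):
        common = rng.poisson(lam_common)            # shared shock -> cross-correlation
        R = np.array([rng.poisson(lam[0]) + common,
                      rng.poisson(lam[1]) + common])
        for k in range(2):
            X[t, k] = thin(X[t - 1, k], alpha[k]) + thin(R_prev[k], beta[k]) + R[k]
        R_prev = R
    return X

series = simulate_binarma(5000)
print("sample cross-correlation:", np.corrcoef(series[:, 0], series[:, 1])[0, 1])
```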

Keywords: non-stationary, BINARMA(1, 1) model, Poisson innovations, conditional maximum likelihood, CML

Procedia PDF Downloads 129
258 Unveiling the Self-Assembly Behavior and Salt-Induced Morphological Transition of Double PEG-Tailed Unconventional Amphiphiles

Authors: Rita Ghosh, Joykrishna Dey

Abstract:

PEG-based amphiphiles are of tremendous importance for their widespread applications in pharmaceutics, household purposes, and drug delivery. Previously, a number of single PEG-tailed amphiphiles having significant applications have been reported by our group. Therefore, it was of immense interest to explore the properties and application potential of PEG-based double-tailed amphiphiles. Herein, for the first time, two novel double PEG-tailed amphiphiles having different PEG chain lengths have been developed. The self-assembly behavior of the newly developed amphiphiles in aqueous buffer (pH 7.0) was thoroughly investigated at 25 °C by a number of techniques, including ¹H-NMR, steady-state and time-dependent fluorescence spectroscopy, dynamic light scattering, transmission electron microscopy, atomic force microscopy, and isothermal titration calorimetry. Despite having two polar PEG chains, both molecules were found to have a strong tendency to self-assemble in aqueous buffered solution above a very low concentration. Surprisingly, the amphiphiles were shown to form stable vesicles spontaneously at room temperature without any external stimuli. The results of calorimetric measurements showed that the vesicle formation is driven by the hydrophobic effect (positive entropy change) of the system, which is associated with the helix-to-random-coil transition of the PEG chain. The spectroscopic data confirmed that the bilayer membrane of the vesicles is constituted by the PEG chains of the amphiphilic molecules. Interestingly, the vesicles were also found to exhibit structural transitions upon addition of salts to the solution. These properties make the vesicles potential candidates for drug delivery.

Keywords: double-tailed amphiphiles, fluorescence, microscopy, PEG, vesicles

Procedia PDF Downloads 117
257 Evaluating and Reducing the Impact of Aircraft Technical Delays and Cancellations on Operational Reliability: Case Study of an Airline Operator

Authors: Adel A. Ghobbar, Ahmad Bakkar

Abstract:

Although special care is given to maintenance, aircraft systems fail, and these failures cause delays and cancellations. The occurrence of delays and cancellations affects operators and manufacturers negatively. To reduce technical delays and cancellations, one should be able to determine the important systems causing them. The goal of this research is to find a method to identify the systems responsible for the most costly delays and cancellations for airline operators. After identifying the relevant information needed to tackle this problem, a predictive model was introduced to forecast the failures and their impact. Data were obtained from the manufacturer's service reliability team database. Subsequently, evaluation methods for delays and cancellations were identified; cost estimation methods were not used because of their complexity. The model that was developed takes into account the frequency of delays and cancellations and uses weighting factors to give an indication of the severity of their duration, with the weighting factors based on customer experience. The data analysis shows that delay and cancellation events are not seasonal and do not follow any specific trends. The use of weighting factors does influence the shortlist over short periods (monthly) but not over the analyzed period of three years. The landing gear and the navigation system are among the top three systems causing delays and cancellations for all three aircraft types. The results confirm that cooperation between certain operators and the manufacturer reduces the impact of delays and cancellations.

Keywords: reliability, availability, delays & cancellations, aircraft maintenance

Procedia PDF Downloads 132
256 Comparison of Two Anesthetic Methods during Interventional Neuroradiology Procedure: Propofol versus Sevoflurane Using Patient State Index

Authors: Ki Hwa Lee, Eunsu Kang, Jae Hong Park

Abstract:

Background: Interventional neuroradiology (INR) has been a rapidly growing and evolving neurosurgical field during the past few decades. Sevoflurane and propofol are both suitable anesthetics for INR procedures. Monitoring of the depth of anesthesia is used very widely. The SEDLine™ monitor, a 4-channel processed EEG monitor, uses a proprietary algorithm to analyze the raw EEG signal and displays the Patient State Index (PSI) values. Only a few studies have examined the PSI in neuro-anesthesia. We aimed to investigate the differences in PSI values and hemodynamic variables between sevoflurane and propofol anesthesia during INR procedures. Methods: We retrospectively reviewed the medical records of patients who were scheduled to undergo embolization of a non-ruptured intracranial aneurysm by a single operator from May 2013 to December 2014. Sixty-five patients were categorized into two groups: sevoflurane (n = 33) and propofol (n = 32). The PSI values, hemodynamic variables, and the use of hemodynamic drugs were analyzed. Results: Significant differences were seen between PSI values obtained during different perioperative stages in both groups (P < 0.0001). The PSI values of the propofol group were lower than those of the sevoflurane group during the INR procedure (P < 0.01). The patients in the propofol group had a more prolonged time to extubation and a greater phenylephrine requirement than the sevoflurane group (p < 0.05). Anti-hypertensive drugs were administered more often during extubation in the sevoflurane group (p < 0.05). Conclusions: The PSI can detect the depth of anesthesia and changes in the concentration of anesthetics during INR procedures. Extubation was faster in the sevoflurane group, but a smoother recovery was seen in the propofol group.

Keywords: interventional neuroradiology, patient state index, propofol, sevoflurane

Procedia PDF Downloads 180
255 Implementation of a Monostatic Microwave Imaging System using a UWB Vivaldi Antenna

Authors: Babatunde Olatujoye, Binbin Yang

Abstract:

Microwave imaging is a portable, noninvasive, and non-ionizing imaging technique that employs low-power microwave signals to reveal objects in the microwave frequency range. This technique has immense potential for adoption in commercial and scientific applications such as security scanning, material characterization, and nondestructive testing. This work presents a monostatic microwave imaging setup using an Ultra-Wideband (UWB), low-cost, miniaturized Vivaldi antenna with a bandwidth of 1–6 GHz. The backscattered signals (S-parameters) of the Vivaldi antenna used for scanning targets were measured in the lab using a VNA. An automated two-dimensional (2-D) scanner was employed for the 2-D movement of the transceiver to collect the measured scattering data from different positions. The targets consist of four metallic objects, each with a distinct shape. A similar setup was also simulated in Ansys HFSS. A high-resolution Back Propagation Algorithm (BPA) was applied to both the simulated and experimental backscattered signals. The BPA utilizes the phase and amplitude information recorded over a two-dimensional aperture of 50 cm × 50 cm with a discrete step size of 2 cm to reconstruct a focused image of the targets. The use of the BPA was demonstrated by coherently resolving and reconstructing reflection signals from conventional time-of-flight profiles. For both the simulated and experimental data, the BPA accurately reconstructed a high-resolution 2-D image of the targets in terms of shape and location. An improvement of the BPA, in terms of target resolution, was achieved by applying a filtering method in the frequency domain.
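
The sketch below shows the coherent focusing idea behind back-propagation/back-projection reconstruction for a monostatic scan: synthetic frequency-domain backscatter from point targets is re-phased and summed onto an image grid. The geometry, frequency sweep, and target positions are illustrative assumptions, not the experimental configuration described above.

```python
# Simplified monostatic back-projection sketch: generate S(position, frequency) for
# point scatterers with round-trip phase 2kR, then focus by phase-compensated summation.
import numpy as np

c = 3e8
freqs = np.linspace(1e9, 6e9, 101)                 # 1-6 GHz sweep
k = 2 * np.pi * freqs / c                          # wavenumbers
xs = np.arange(-0.25, 0.26, 0.02)                  # 2 cm step over a 50 cm aperture
antenna = np.array([[x, 0.0] for x in xs])         # scan positions along x at y = 0
targets = np.array([[0.05, 0.30], [-0.10, 0.40]])  # point scatterers (x, y) in metres

# Simulate backscatter data.
S = np.zeros((len(antenna), len(freqs)), dtype=complex)
for t in targets:
    R = np.linalg.norm(antenna - t, axis=1)[:, None]          # range per scan position
    S += np.exp(-1j * 2 * k[None, :] * R)

# Back-projection: coherently re-phase and sum onto an image grid.
gx = np.linspace(-0.25, 0.25, 101)
gy = np.linspace(0.1, 0.5, 81)
image = np.zeros((len(gy), len(gx)), dtype=complex)
for iy, y in enumerate(gy):
    for ix, x in enumerate(gx):
        R = np.linalg.norm(antenna - np.array([x, y]), axis=1)[:, None]
        image[iy, ix] = np.sum(S * np.exp(1j * 2 * k[None, :] * R))

peak_iy, peak_ix = np.unravel_index(np.argmax(np.abs(image)), image.shape)
print("brightest pixel near (x, y) =", gx[peak_ix], gy[peak_iy])
```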

Keywords: back propagation, microwave imaging, monostatic, Vivaldi antenna, ultra wideband

Procedia PDF Downloads 19
254 Interface Engineering of Short- and Ultrashort Period W-Based Multilayers for Soft X-Rays

Authors: A. E. Yakshin, D. Ijpes, J. M. Sturm, I. A. Makhotkin, M. D. Ackermann

Abstract:

Applications like synchrotron optics, soft X-ray microscopy, X-ray astronomy, and wavelength dispersive X-ray fluorescence (WD-XRF) rely heavily on short- and ultra-short-period multilayer (ML) structures. In WD-XRF, ML serves as an analyzer crystal to disperse emission lines of light elements. The key requirement for the ML is to be highly reflective while also providing sufficient angular dispersion to resolve specific XRF lines. For these reasons, MLs with periods ranging from 1.0 to 2.5 nm are of great interest in this field. Due to the short period, the reflectance of such MLs is extremely sensitive to interface imperfections such as roughness and interdiffusion. Moreover, the thickness of the individual layers is only a few angstroms, which is close to the limit of materials to grow a continuous film. MLs with a period between 2.5 nm and 1.0 nm, combining tungsten (W) reflector with B₄C, Si, and Al spacers, were created and examined. These combinations show high theoretical reflectance in the full range from C-Kα (4.48nm) down to S-Kα (0.54nm). However, the formation of optically unfavorable compounds, intermixing, and interface roughness result in limited reflectance. A variety of techniques, including diffusion barriers, seed layers, and ion polishing for sputter-deposited MLs, were used to address these issues. Diffuse scattering measurements, photo-electron spectroscopy analysis, and X-ray reflectivity measurements showed a noticeable reduction of compound formation, intermixing, and interface roughness. This also resulted in a substantial increase in soft X-ray reflectance for W/Si, W/B4C, and W/Al MLs. In particular, the reflectivity of 1 nm period W/Si multilayers at the wavelength of 0.84 nm increased more than 3-fold – propelling forward the applicability of such multilayers for shorter wavelengths.

Keywords: interface engineering, reflectance, short period multilayer structures, x-ray optics

Procedia PDF Downloads 50
253 General Architecture for Automation of Machine Learning Practices

Authors: U. Borasi, Amit Kr. Jain, Rakesh, Piyush Jain

Abstract:

Data collection, data preparation, model training, model evaluation, and deployment are all processes in a typical machine learning workflow. Training data needs to be gathered and organised; this often entails collecting a sizable dataset and cleaning it to remove or correct any inaccurate or missing information. Preparing the data for use in the machine learning model requires pre-processing it after it has been acquired, which often entails actions like scaling or normalising the data, handling outliers, selecting appropriate features, reducing dimensionality, etc. This pre-processed data is then used to train a model with some machine learning algorithm. After the model has been trained, it needs to be assessed by computing metrics like accuracy, precision, and recall on a test dataset. Every time a new model is built, both data pre-processing and model training, two crucial processes in the machine learning (ML) workflow, must be carried out. Moreover, various machine learning algorithms can be employed with every single approach to data pre-processing, generating a large set of combinations to choose from. For example, a different algorithm can be paired with every method of handling missing values (dropping records, replacing with the mean, etc.), every scaling technique, and every combination of selected features. As a result, in order to obtain the optimum outcome, these tasks are frequently repeated in different combinations. This paper suggests a simple architecture for organizing this large “combination set of pre-processing steps and algorithms” into an automated workflow, which simplifies the task of carrying out all possibilities.
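
A minimal sketch of such an "operator pool" follows: it enumerates the combinations of pre-processing steps and learning algorithms and cross-validates each resulting pipeline. The specific operators, dataset, and ranking policy are illustrative assumptions, not the architecture proposed in the paper.

```python
# Enumerate pre-processing x algorithm combinations and evaluate each pipeline.
from itertools import product

from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Operator pool: one option list per workflow stage.
imputers = {"mean": SimpleImputer(strategy="mean"), "median": SimpleImputer(strategy="median")}
scalers = {"standard": StandardScaler(), "minmax": MinMaxScaler()}
models = {"logreg": LogisticRegression(max_iter=5000), "tree": DecisionTreeClassifier(random_state=0)}

results = []
for (i_name, imp), (s_name, sc), (m_name, mdl) in product(imputers.items(), scalers.items(), models.items()):
    pipe = Pipeline([("impute", imp), ("scale", sc), ("model", mdl)])
    score = cross_val_score(pipe, X, y, cv=5).mean()   # simple scheduler: run everything
    results.append((score, f"{i_name} -> {s_name} -> {m_name}"))

for score, config in sorted(results, reverse=True):
    print(f"{score:.3f}  {config}")
```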

Keywords: machine learning, automation, AUTOML, architecture, operator pool, configuration, scheduler

Procedia PDF Downloads 57
252 Carbonation of Wollastonite (001) competing Hydration: Microscopic Insights from Ion Spectroscopy and Density Functional Theory

Authors: Peter Thissen

Abstract:

In this work, we report on the influence of the chemical potential of water on the carbonation reaction of wollastonite (CaSiO₃) as a model surface for cement and concrete. Total energy calculations based on density functional theory (DFT), combined with kinetic barrier predictions based on the nudged elastic band (NEB) method, show that exposure of the water-free wollastonite surface to CO₂ results in barrier-less carbonation. CO₂ reacts with the surface oxygen and forms carbonate (CO₃²⁻) complexes together with a major reconstruction of the surface. The reaction comes to a standstill after one carbonate monolayer has been formed. When one water monolayer covers the wollastonite surface, the carbonation is no longer barrier-less, yet it still ends in a localized monolayer. Covered with multilayers of water, the thermodynamic ground state of the wollastonite changes completely owing to a metal-proton exchange reaction (MPER, also called early-stage hydration), and Ca²⁺ ions are partially removed from the solid phase into the H₂O/wollastonite interface. Mobile Ca²⁺ ions react again with CO₂ and form carbonate complexes, ending in a delocalized layer. By means of high-resolution time-of-flight secondary ion mass spectrometry (ToF-SIMS) imaging, we confirm that hydration can lead to a partial delocalization of Ca²⁺ ions on wollastonite surfaces. Finally, we evaluate the impact of our model-surface results by means of Low Energy Ion Scattering (LEIS) spectroscopy, combined with a careful discussion of the competing carbonation vs. hydration reactions.

Keywords: calcium silicate, carbonation, hydration, metal-proton exchange reaction

Procedia PDF Downloads 363
251 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images

Authors: Mehrnoosh Omati, Mahmod Reza Sahebi

Abstract:

Information on land use/land cover change plays an essential role in environmental assessment, planning, and management for regional development. Remotely sensed imagery is widely used for providing information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, two PolSAR images are first segmented using an integration of the marker-controlled watershed algorithm and a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object-based level. The experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show a 3% and 6% improvement in overall accuracy and kappa coefficient, respectively. Also, the proposed method can correctly distinguish homogeneous image parcels.
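
As a small illustration of the MRF idea behind such segmentation (not the authors' coupled-MRF/watershed method), the sketch below runs an iterated conditional modes (ICM) pass that regularises a noisy binary change map with a Potts-type spatial prior on synthetic data; all data and parameters are made up for demonstration.

```python
# ICM smoothing of a noisy binary change map with a 4-neighbour Potts prior.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "difference" image: a changed square patch plus speckle-like noise.
truth = np.zeros((64, 64))
truth[20:44, 20:44] = 1.0
obs = truth + rng.normal(0, 0.6, truth.shape)

def icm(obs, beta=1.5, n_iter=5):
    """Binary labelling minimising (obs - label)^2 + beta * (# disagreeing neighbours)."""
    labels = (obs > 0.5).astype(int)                       # initial pixel-wise decision
    for _ in range(n_iter):
        for i in range(obs.shape[0]):
            for j in range(obs.shape[1]):
                neigh = [labels[i + di, j + dj]
                         for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= i + di < obs.shape[0] and 0 <= j + dj < obs.shape[1]]
                costs = [(obs[i, j] - lab) ** 2 + beta * sum(n != lab for n in neigh)
                         for lab in (0, 1)]
                labels[i, j] = int(np.argmin(costs))
    return labels

smoothed = icm(obs)
print("pixel accuracy vs. truth:", (smoothed == truth).mean())
```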

Keywords: coupled Markov random field (MRF), environment, object-based analysis, polarimetric SAR (PolSAR) images

Procedia PDF Downloads 217
250 Intensity-Enhanced Super-Resolution Amplitude Apodization Effect on the Non-Spherical Near-Field Particle-Lenses

Authors: Liyang Yue, Bing Yan, James N. Monks, Rakesh Dhama, Zengbo Wang, Oleg V. Minin, Igor V. Minin

Abstract:

A particle can function as a refractive lens to focus a plane wave, generating a narrow, highly intense, weakly diverging beam within a sub-wavelength volume, known as the 'photonic jet'. The refractive index contrast (particle to background medium) and the scaling effect of the dielectric particle (size relative to wavelength) play key roles in photonic jet formation, rather than the shape of the particle-lens. The waist (full width at half maximum, FWHM) of a photonic jet can be beyond the diffraction limit and smaller than the Airy disk, which defines the minimum distance at which two objects can be imaged as two instead of one. Many important imaging and sensing applications have been enabled by the super-resolution characteristic of the photonic jet. It is known that the apodization method, in the form of an amplitude pupil mask centrally situated on a particle-lens, can further reduce the waist of a photonic nanojet; however, it usually lowers the intensity at the focus due to blocking of the incident light. In this paper, an anomalous intensity-enhanced apodization effect in the near field was discovered via numerical simulation. It was also experimentally verified by a scale model using a copper-masked Teflon cuboid solid immersion lens (SIL) with a 22 mm side length under illumination by a plane wave with an 8 mm wavelength. The peak intensity enhancement and the lateral resolution of the produced photonic jet increased by about 36.0% and 36.4% in this approach, respectively. This phenomenon may exhibit a scaling effect and would then be valid in multiple frequency bands.

Keywords: apodization, particle-lens, scattering, near-field optics

Procedia PDF Downloads 191
249 Development of a Few-View Computed Tomographic Reconstruction Algorithm Using Multi-Directional Total Variation

Authors: Chia Jui Hsieh, Jyh Cheng Chen, Chih Wei Kuo, Ruei Teng Wang, Woei Chyn Chu

Abstract:

Compressed sensing (CS) based computed tomographic (CT) reconstruction algorithms utilize total variation (TV) to transform the CT image into a sparse domain and minimize the L1-norm of the sparse image for reconstruction. Different from traditional CS-based reconstruction, which only calculates x-coordinate and y-coordinate TV to transform CT images into the sparse domain, we propose a multi-directional TV to transform the tomographic image into the sparse domain for low-dose reconstruction. Our method considers all possible directions of TV calculation around a pixel, so the sparse transform for CS-based reconstruction is more accurate. In 2D CT reconstruction, we use an eight-directional TV to transform the CT image into the sparse domain; furthermore, we use a 26-directional TV for 3D reconstruction. This multi-directional sparse transform makes the CS-based reconstruction algorithm more powerful in reducing noise and increasing image quality. To validate and evaluate the performance of this multi-directional sparse transform, we use both the Shepp-Logan phantom and a head phantom as the targets for reconstruction with the corresponding simulated sparse projection data (angular sampling intervals of 5 deg and 6 deg, respectively). The results show that the multi-directional TV method can reconstruct images with relatively fewer artifacts compared with the traditional CS-based reconstruction algorithm that only calculates x-coordinate and y-coordinate TV. We also chose RMSE, PSNR, and UQI as the metrics for quantitative analysis; regardless of which metric is calculated, the proposed multi-directional TV method performs better.
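
The following sketch shows one plausible way to compute an eight-directional TV value in 2-D: absolute finite differences are accumulated towards all eight neighbours of each pixel instead of only the x and y directions. Boundary handling, normalisation, and the embedding into the full CS reconstruction loop are assumptions of this sketch, not details from the paper.

```python
# Eight-directional total variation of a 2-D image (wrap-around boundaries via np.roll).
import numpy as np

def multidirectional_tv(img):
    """Sum of absolute finite differences over the eight neighbour directions."""
    shifts = [(-1, -1), (-1, 0), (-1, 1),
              ( 0, -1),          ( 0, 1),
              ( 1, -1), ( 1, 0), ( 1, 1)]
    tv = 0.0
    for di, dj in shifts:
        shifted = np.roll(np.roll(img, di, axis=0), dj, axis=1)
        diff = img - shifted
        step = np.hypot(di, dj)        # diagonal neighbours are sqrt(2) pixels away
        tv += np.abs(diff / step).sum()
    return tv

phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0
noisy = phantom + np.random.default_rng(0).normal(0, 0.05, phantom.shape)
print("TV(phantom) =", multidirectional_tv(phantom))
print("TV(noisy)   =", multidirectional_tv(noisy))   # higher: noise is penalised
```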

Keywords: compressed sensing (CS), low-dose CT reconstruction, total variation (TV), multi-directional gradient operator

Procedia PDF Downloads 256
248 Clay Effect on PET/Clay and PEN/Clay Nanocomposites Properties

Authors: F. Zouai, F. Z. Benabid, S. Bouhelal, D. Benachour

Abstract:

Reinforced plastics or nanocomposites have attracted considerable attention in scientific and industrial fields because a very small amount of clay can significantly improve the properties of the polymer. The polymeric matrices used in this work are two saturated polyesters, i.e., polyethylene terephthalate (PET) and polyethylene naphthalate (PEN). The successful processing of compatible blends based on poly(ethylene terephthalate) (PET)/poly(ethylene naphthalate) (PEN)/clay nanocomposites in one step by reactive melt extrusion is described. Untreated clay was first purified and functionalized 'in situ' with a compound based on an organic peroxide/sulfur mixture and tetramethylthiuram disulfide as the activator for sulfur. The PET and PEN materials were first separately mixed in the molten state with the functionalized clay. The PET/4 wt% clay and PEN/7.5 wt% clay compositions showed total exfoliation. These compositions, denoted nPET and nPEN, respectively, were used to prepare new n(PET/PEN) nanoblends in the same mixing batch. The n(PET/PEN) nanoblends were compared to neat PET/PEN blends. The blends and nanocomposites were characterized using various techniques, and their microstructural and nanostructural properties were investigated. Fourier transform infrared spectroscopy (FTIR) results showed that the exfoliation of the tetrahedral clay nanolayers is complete and the octahedral structure totally disappears. It was shown that total exfoliation, confirmed by wide-angle X-ray scattering (WAXS) measurements, contributes to the enhancement of the impact strength and tensile modulus. In addition, the WAXS results indicated that all samples are amorphous. The differential scanning calorimetry (DSC) study indicated the occurrence of one glass transition temperature Tg, one crystallization temperature Tc, and one melting temperature Tm for every composition.

Keywords: exfoliation, XRD, DSC, montmorillonite, nanocomposites, PEN, PET, plastograph, reactive melt-mixing

Procedia PDF Downloads 325
247 A Flute Tracking System for Monitoring the Wear of Cutting Tools in Milling Operations

Authors: Hatim Laalej, Salvador Sumohano-Verdeja, Thomas McLeay

Abstract:

Monitoring of tool wear in milling operations is essential for achieving the desired dimensional accuracy and surface finish of a machined workpiece. Although there are numerous statistical models and artificial intelligence techniques available for monitoring the wear of cutting tools, these techniques cannot pinpoint which cutting edge of the tool, or which insert in the case of indexable tooling, is worn or broken. Currently, the task of monitoring the wear on the tool cutting edges is carried out by the operator, who performs a manual inspection, causing undesirable stoppages of machine tools and consequently resulting in costs incurred from lost productivity. The present study is concerned with the development of a flute tracking system to segment signals related to each physical flute of a three-flute cutter used in an end milling operation. The purpose of the system is to monitor the cutting condition of individual flutes separately in order to determine their progressive wear rates and to predict imminent tool failure. The results of this study clearly show that signals associated with each flute can be effectively segmented using the proposed flute tracking system. Furthermore, the results illustrate that by segmenting the sensor signal by flute it is possible to investigate the wear on each physical cutting edge of the cutting tool. These findings are significant in that they facilitate the online condition monitoring of a cutting tool for each specific flute without the need for operators/engineers to perform manual inspections of the tool.
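
The sketch below illustrates the basic flute-tracking idea on synthetic data: with the spindle speed and number of flutes known, the sensor stream is cut into tooth-passing windows and each window is attributed to the physical flute that was cutting, so per-flute statistics can be compared. The sample rate, spindle speed, and signal model are assumptions for illustration, not the authors' system.

```python
# Segment a milling sensor signal by flute using the tooth-passing period.
import numpy as np

fs = 10_000            # sensor sample rate [Hz] (assumed)
rpm = 3_000            # spindle speed (assumed)
n_flutes = 3
samples_per_tooth = int(round(fs * 60.0 / (rpm * n_flutes)))   # samples per flute engagement

rng = np.random.default_rng(0)
n = int(fs * 2.0)                                               # 2 s of signal
flute_idx = (np.arange(n) // samples_per_tooth) % n_flutes      # which flute is cutting

# Synthetic signal: flute 2 is "worn", producing a stronger cutting-force signature.
envelope = np.where(flute_idx == 2, 1.0, 0.5)
carrier = np.abs(np.sin(2 * np.pi * np.arange(n) * rpm * n_flutes / (60 * fs)))
signal = rng.normal(0, 0.2, n) + envelope * carrier

# Per-flute RMS reveals the worn edge.
for f in range(n_flutes):
    seg = signal[flute_idx == f]
    print(f"flute {f}: RMS = {np.sqrt(np.mean(seg**2)):.3f}")
```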

Keywords: machining, milling operation, tool condition monitoring, tool wear prediction

Procedia PDF Downloads 302
246 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider

Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón

Abstract:

The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. It would be possible to achieve better measurements by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must also provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating-conditions data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for taking data in the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a number of parameters such as the nominal voltages for each photomultiplier tube (PMT), their threshold levels to accept or reject the incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0, and they have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks, realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system that integrates into the global control system of the ALICE experiment.

Keywords: AD0, ALICE, DCS, LHC

Procedia PDF Downloads 305
245 Evaluation of the Effect of Turbulence Caused by the Oscillation Grid on Oil Spill in Water Column

Authors: Mohammad Ghiasvand, Babak Khorsandi, Morteza Kolahdoozan

Abstract:

Under the influence of waves, oil in the sea is subject to vertical dispersion in the water column. How oil is dispersed in the water column is among the least understood of the processes affecting oil in the marine environment, which highlights the need for research in this field. Therefore, this study investigates the distribution of oil in the water column in a turbulent environment with zero mean velocity. The lack of laboratory results for analyzing the distribution of petroleum pollutants in deep water, both for understanding the physics of the phenomenon and for calibrating numerical models, has motivated the development of laboratory models. In line with the aim of the present study, which is to investigate the distribution of oil in the homogeneous and isotropic turbulence generated by an oscillating grid, crude oil was poured onto the water surface once the desired conditions had been reached, and its distribution in the water column due to turbulence was investigated. All of the experimental procedures in this study were implemented and used for the first time in Iran, and oil diffusion in the water column was considered one of the key aspects of pollutant diffusion in the oscillating-grid environment. The oscillation velocities were measured at depths of 10, 15, 20, and 25 cm below the water surface and used in the analysis of oil diffusion in terms of the turbulence parameters. For the present system, compared with the static case, grid motion at a frequency of 0.8 Hz increased oil diffusion at the four depths (from top to bottom) by 26.18, 31.57, 37.5, and 50%. Also, 2.5 minutes after the oil spill at a frequency of 0.8 Hz, the oil distribution at these depths had increased by 49, 61.5, 85, and 146.1%, respectively, compared to the base (static) state.

Keywords: homogeneous and isotropic turbulence, oil distribution, oscillating grid, oil spill

Procedia PDF Downloads 75
244 A Picture is worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels

Authors: Tal Remez, Or Litany, Alex Bronstein

Abstract:

The pursuit of smaller pixel sizes at ever increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has been recently embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
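
As a point of reference for the statistics involved, the sketch below implements the closed-form maximum-likelihood intensity estimate for binary, threshold-1 pixels: within a block of N jots exposed to Poisson photon arrivals of rate lam, P(jot = 1) = 1 - exp(-lam), so lam can be estimated from the fraction of ones. The sparse-synthesis prior and the real-time neural approximation proposed in the paper are not reproduced here; this is only the basic data-fitting term on synthetic data.

```python
# Closed-form ML intensity estimate from oversampled one-bit (threshold-1) measurements.
import numpy as np

rng = np.random.default_rng(0)

def reconstruct_block(jots):
    """ML intensity estimate from a block of one-bit measurements."""
    frac_ones = np.clip(jots.mean(), 0.0, 1.0 - 1e-9)   # avoid log(0) when saturated
    return -np.log(1.0 - frac_ones)

# Simulate a smooth intensity ramp, oversampled with 16x16 binary jots per output pixel.
true_intensity = np.linspace(0.1, 3.0, 32)
oversample = 16 * 16
estimates = []
for lam in true_intensity:
    photons = rng.poisson(lam, size=oversample)          # photon count per jot
    jots = (photons >= 1).astype(int)                    # one-bit threshold response
    estimates.append(reconstruct_block(jots))

print("max abs error:", np.max(np.abs(np.array(estimates) - true_intensity)))
```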

Keywords: binary pixels, maximum likelihood, neural networks, sparse coding

Procedia PDF Downloads 201
243 An Operators’ Real-sense-based Fire Simulation for Human Factors Validation in Nuclear Power Plants

Authors: Sa-Kil Kim, Jang-Soo Lee

Abstract:

On March 31, 1993, a severe fire accident took place in a nuclear power plant located in Narora in North India. The event involved a major fire in the turbine building of NAPS unit-1 and resulted in a total loss of power to the unit for 17 hours. In addition, there was heavy ingress of smoke into the control room, mainly through the intake of the ventilation system, forcing the operators to vacate the control room. The Narora fire accident teaches us that operators may lose their composure and behave unpredictably during a fire. After the Fukushima accident, which resulted from a natural disaster, preparation for and control of unanticipated external events are also required for the ultimate safety of nuclear power plants. Since last year, our research team has been developing a test and evaluation facility that can simulate external events such as earthquakes and fires based on the operators' real sense. As one of the results of the project, we propose a real-sense-based unit facility that can simulate fire events in a control room for use as a test-bed for human factors validation. The test-bed has the shape of an operator's workstation and functions that simulate fire conditions such as smoke, heat, and auditory alarms in accordance with prepared fire scenarios. Furthermore, the test-bed can be used for operator training and experience.

Keywords: human behavior in fire, human factors validation, nuclear power plants, real-sense-based fire simulation

Procedia PDF Downloads 283
242 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media

Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding

Abstract:

A mechanical wave or vibration propagating through granular media exhibits a specific signature in time. A coherent pulse or wavefront arrives first, with multiply scattered waves (coda) arriving later. The coherent pulse is microstructure independent, i.e., it depends only on the bulk properties of the disordered granular sample: the sound wave velocity of the sample and hence the bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. The attenuation and broadening of the pulse are affected by disorder (polydispersity: contrast in the sizes of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and of the initial amplitude (non-linearity) of the pulse imparted to the system on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The sizes of the particles have been selected randomly from a Gaussian distribution, whose standard deviation is the relevant parameter quantifying the effect of disorder on the coherent wavefront. Since the coherent wavefront is independent of the system configuration, ensemble averaging has been used to improve the signal quality of the coherent pulse and to remove the multiply scattered waves. The results concerning the width of the coherent wavefront have been formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.
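
The sketch below shows the basic ingredients of such a simulation: a one-dimensional chain with Hertzian contacts (force proportional to overlap to the power 3/2), radii drawn from a Gaussian distribution, and velocity-Verlet time stepping after a pulse is imparted at one end. Material parameters, stiffness, and time step are illustrative assumptions and do not reproduce the original study's setup or units.

```python
# 1-D polydisperse granular chain with Hertzian contacts, excited by an end pulse.
import numpy as np

rng = np.random.default_rng(2)

n = 200
mean_r, sigma_r = 1e-3, 0.05e-3                   # radii: mean 1 mm, 5 % polydispersity
radii = rng.normal(mean_r, sigma_r, n)
mass = 2600.0 * (4.0 / 3.0) * np.pi * radii**3    # assumed material density 2600 kg/m^3
x = np.concatenate(([0.0], np.cumsum(radii[:-1] + radii[1:])))   # just-touching chain
v = np.zeros(n)
v[0] = 0.05                                       # impart an initial pulse at one end
kn = 5e9                                          # effective Hertz prefactor (assumed)

def forces(x):
    f = np.zeros(n)
    overlap = (x[:-1] + radii[:-1] + radii[1:]) - x[1:]          # contact overlaps
    fc = kn * np.where(overlap > 0, overlap, 0.0) ** 1.5         # Hertz: F ~ delta^(3/2)
    f[:-1] -= fc                                                 # push left particle back
    f[1:] += fc                                                  # push right particle forward
    return f

dt, steps = 1e-8, 20000
a = forces(x) / mass
for _ in range(steps):                            # velocity-Verlet integration
    x += v * dt + 0.5 * a * dt**2
    a_new = forces(x) / mass
    v += 0.5 * (a + a_new) * dt
    a = a_new

print("pulse currently carried by particle", int(np.argmax(np.abs(v))))
```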

Keywords: discrete elements, Hertzian contact, polydispersity, weakly nonlinear, wave propagation

Procedia PDF Downloads 204
241 Robust Heart Rate Estimation from Multiple Cardiovascular and Non-Cardiovascular Physiological Signals Using Signal Quality Indices and Kalman Filter

Authors: Shalini Rankawat, Mansi Rankawat, Rahul Dubey, Mazad Zaveri

Abstract:

Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often seriously corrupted by noise, artifacts, and missing data, which lead to errors in the estimation of heart rate (HR) and to incidences of false alarms from ICU monitors. Clinical support in the ICU requires the most reliable heart rate estimation possible. Cardiac activity, because of its relatively high electrical energy, may introduce artifacts into electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) recordings. This paper presents a robust heart rate estimation method based on detecting the R-peaks of ECG artifacts in EEG, EMG, and EOG signals, using an energy-based function and a novel Signal Quality Index (SQI) assessment technique. The SQIs of the physiological signals (EEG, EMG, and EOG) were obtained by correlating the nonlinear energy operator (Teager energy) of these signals with either the ECG or the ABP signal. HR is estimated from the ECG, ABP, EEG, EMG, and EOG signals by separate Kalman filters based upon the individual SQIs. Data fusion of the HR estimates was then performed by weighting each estimate by the corresponding Kalman filter's SQI-modified innovations. The fused HR estimate is more accurate and robust than any of the individual HR estimates. This method was evaluated on the MIMIC II database of PhysioNet, recorded from bedside monitors of ICU patients. The method provides an accurate HR estimate even in the presence of noise and artifacts.
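
The sketch below reduces the fusion idea to its simplest form: per-channel HR estimates are combined with weights derived from their signal quality indices, so a corrupted channel contributes little. The full Kalman-innovation weighting described in the paper is replaced here by a plain SQI-weighted average, and the channel values are made up for illustration.

```python
# SQI-weighted fusion of heart-rate estimates from several physiological channels.
import numpy as np

def fuse_heart_rate(estimates, sqi):
    """Weighted fusion of per-channel HR estimates using their SQIs as weights."""
    estimates = np.asarray(estimates, dtype=float)
    sqi = np.clip(np.asarray(sqi, dtype=float), 0.0, 1.0)
    if sqi.sum() == 0:                       # no trustworthy channel: fall back to median
        return float(np.median(estimates))
    return float(np.sum(sqi * estimates) / np.sum(sqi))

channels = ["ECG", "ABP", "EEG", "EMG", "EOG"]
hr = [78.0, 80.0, 140.0, 76.0, 79.0]         # EEG-derived estimate corrupted by artefact
quality = [0.95, 0.90, 0.05, 0.60, 0.70]     # its SQI is correspondingly low

print("fused HR:", round(fuse_heart_rate(hr, quality), 1), "bpm")
```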

Keywords: ECG, ABP, EEG, EMG, EOG, ECG artifacts, Teager-Kaiser energy, heart rate, signal quality index, Kalman filter, data fusion

Procedia PDF Downloads 694
240 Improving Trainings of Mineral Processing Operators Through Gamification and Modelling and Simulation

Authors: Pedro A. S. Bergamo, Emilia S. Streng, Jan Rosenkranz, Yousef Ghorbani

Abstract:

Within the often-hazardous mineral industry, simulation training has rapidly gained appreciation as an important method of increasing site safety and productivity through enhanced operator skill and knowledge. Performance calculations related to froth flotation, one of the most important concentration methods, are probably the hardest topic taught during the training of plant operators. Currently, most training programs teach these skills by traditional methods such as slide presentations and hand-written exercises, with a heavy focus on memorization. To optimize certain aspects of this training, we developed “MinFloat”, which teaches the operating formulas of the froth flotation process with the help of gamification. The simulation core, based on a first-principles flotation model, was implemented in Unity3D, and an instructor tutoring system was developed that presents didactic content and reviews the selected answers. The game was tested by 25 professionals with extensive experience in the mining industry, using a questionnaire formulated for training evaluations. According to their feedback, the game scored well in terms of quality, didactic efficacy and inspiring character. The testers' feedback on the main target audience and the outlook for the proposed solution are presented. This paper aims to provide technical background on the construction of educational games for the mining industry, besides showing how feedback from experts can be gathered more efficiently thanks to new technologies such as online forms.

Keywords: training evaluation, simulation-based training, modelling and simulation, froth flotation

Procedia PDF Downloads 113
239 Bi-Criteria Vehicle Routing Problem for Possibility Environment

Authors: Bezhan Ghvaberidze

Abstract:

A multiple criteria optimization approach for the solution of the Fuzzy Vehicle Routing Problem (FVRP) is proposed. For the possibility environment, the levels of movement between customers are calculated by the constructed interactive simulation algorithm. The first criterion of the bi-criteria optimization problem, minimization of the expectation of the total fuzzy travel time on closed routes, is constructed for the FVRP. A new second criterion, maximization of the feasibility of movement on the closed routes, is constructed using the Choquet finite averaging operator. The FVRP is reduced to a bi-criteria partitioning problem over the so-called “promising” routes, which are selected from all admissible closed routes. The convenient selection of the “promising” routes allows us to solve the reduced problem in real-time computing. For the numerical solution of the bi-criteria partitioning problem, the ε-constraint approach is used. An exact algorithm is implemented based on D. Knuth's Dancing Links technique and the algorithm DLX. The main objective was to present the new approach to the FVRP for situations in which there are difficulties while moving on the roads; this approach is called the FVRP for extreme conditions (FVRP-EC) on the roads. A further aim of this paper was to construct a solution model for the formulated FVRP. Results are illustrated on a numerical example in which all Pareto-optimal solutions are found. An approach for the more complex FVRP model with time windows was also developed, and a numerical example is presented in which optimal routes are constructed for extreme conditions on the roads.

Keywords: combinatorial optimization, Fuzzy Vehicle routing problem, multiple objective programming, possibility theory

Procedia PDF Downloads 485
238 High Aspect Ratio Sio2 Capillary Based On Silicon Etching and Thermal Oxidation Process for Optical Modulator

Authors: Nguyen Van Toan, Suguru Sangu, Tetsuro Saito, Naoki Inomata, Takahito Ono

Abstract:

This paper presents the design and fabrication of an optical window for an optical modulator aimed at image sensing applications. The optical window consists of micrometer-order SiO2 capillaries (a porous solid) that can modulate the transmitted light intensity by moving a liquid into and out of the porous solid. A high optical transmittance of the optical window can be achieved due to refractive index matching when the liquid penetrates the porous solid. Otherwise, its light transmittance is lower because of light reflection and scattering by the air holes and capillary walls. Silicon capillaries fabricated by the deep reactive ion etching (DRIE) process are completely oxidized to form the SiO2 capillaries; therefore, high-aspect-ratio SiO2 capillaries can be achieved based on silicon capillaries formed by the DRIE technique. The large compressive stress of the oxide causes bending of the capillary structure, which is reduced by optimizing the design of the device structure; the large stress of the optical window can be released via thin supporting beams. A 7.2 mm × 9.6 mm optical window area, aimed at full integration with the image sensor format, was successfully fabricated, and its optical transmittance was evaluated with and without inserted liquids (ethanol and matching oil). The achieved modulation range is approximately 20% to 35% with and without liquid penetration in the visible region (wavelength range from 450 nm to 650 nm).

Keywords: thermal oxidation process, SiO2 capillaries, optical window, light transmittance, image sensor, liquid penetration

Procedia PDF Downloads 491
237 Lipid-Chitosan Hybrid Nanoparticles for Controlled Delivery of Cisplatin

Authors: Muhammad Muzamil Khan, Asadullah Madni, Nina Filipczek, Jiayi Pan, Nayab Tahir, Hassan Shah, Vladimir Torchilin

Abstract:

Lipid-polymer hybrid nanoparticles (LPHNPs) are delivery systems for controlled drug delivery at tumor sites. The superior biocompatibility of the lipid and the structural advantages of the polymer are combined in this system for controlled drug delivery. In the present study, cisplatin-loaded lipid-chitosan hybrid nanoparticles were formulated by the single-step ionic gelation method based on the ionic interaction between positively charged chitosan and negatively charged lipid. Formulations with various chitosan-to-lipid ratios were investigated to obtain the optimal particle size, encapsulation efficiency, and controlled release pattern. Transmission electron microscopy and dynamic light scattering analysis demonstrated a size range of 181-245 nm and a zeta potential range of 20-30 mV. Compatibility among the components and the stability of the formulation were demonstrated with FTIR analysis and thermal studies, respectively. The therapeutic efficacy and cellular interaction of the cisplatin-loaded LPHNPs were investigated using in vitro cell-based assays in the A2780/ADR ovarian carcinoma cell line. Additionally, the cisplatin-loaded LPHNPs exhibited a low toxicity profile in rats. The in vivo pharmacokinetic study also demonstrated controlled delivery of cisplatin with an enhanced mean residence time and half-life. Our studies suggest that cisplatin-loaded LPHNPs are a promising platform for the controlled delivery of cisplatin in cancer therapy.

Keywords: cisplatin, lipid-polymer hybrid nanoparticle, chitosan, in vitro cell line study

Procedia PDF Downloads 130
236 Diagnostic Accuracy of the Tuberculin Skin Test for Tuberculosis Diagnosis: Interest of Using ROC Curve and Fagan’s Nomogram

Authors: Nouira Mariem, Ben Rayana Hazem, Ennigrou Samir

Abstract:

Background and aim: During the past decade, the frequency of extrapulmonary forms of tuberculosis has increased. These forms are under-diagnosed using conventional tests. The aim of this study was to evaluate the performance of the Tuberculin Skin Test (TST) for the diagnosis of tuberculosis, using the ROC curve and Fagan's nomogram methodology. Methods: This was a case-control, multicenter study in 11 anti-tuberculosis centers in Tunisia, during the period from June to November 2014. The cases were adults aged between 18 and 55 years with confirmed tuberculosis. Controls were free from tuberculosis. A data collection sheet was filled out and a TST was performed for each participant. Diagnostic accuracy measures of the TST were estimated using the ROC curve and the area under the curve (AUC), yielding the sensitivity and specificity of a determined cut-off point. Fagan's nomogram was used to estimate its predictive values. Results: Overall, 1053 patients were enrolled, composed of 339 cases (sex-ratio (M/F) = 0.87) and 714 controls (sex-ratio (M/F) = 0.99). The mean age was 38.3 ± 11.8 years for cases and 33.6 ± 11 years for controls. The mean diameter of the TST induration was significantly higher among cases than controls (13.7 mm vs. 6.2 mm; p = 10⁻⁶). The area under the curve was 0.789 [95% CI: 0.758-0.819; p = 0.01], corresponding to moderate discriminating power for this test. The most discriminative cut-off value of the TST, associated with the best sensitivity (73.7%) and specificity (76.6%) pair, was about 11 mm, with a Youden index of 0.503. The positive and negative predictive values were 3.11% and 99.52%, respectively. Conclusion: In view of these results, we can conclude that the TST can be used for tuberculosis diagnosis with good sensitivity and specificity. However, the measurement of the skin induration and its interpretation are operator dependent and remain difficult and subjective. The combination of the TST with another test such as the Quantiferon test would be a good alternative.
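
The sketch below reproduces the arithmetic behind two of the reported quantities: Youden's index for the chosen cut-off, and Fagan-style post-test probabilities (predictive values) obtained from the likelihood ratios and a pre-test prevalence. The sensitivity and specificity are the values reported above; the 1% prevalence is an illustrative assumption, not a figure from the study.

```python
# Youden's index and Fagan-nomogram-style predictive values from Se, Sp, and prevalence.
def youden(sensitivity, specificity):
    return sensitivity + specificity - 1.0

def post_test_probabilities(sensitivity, specificity, prevalence):
    """Positive/negative predictive values via likelihood ratios."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    pre_odds = prevalence / (1.0 - prevalence)
    ppv = lr_pos * pre_odds / (1.0 + lr_pos * pre_odds)
    npv = 1.0 - lr_neg * pre_odds / (1.0 + lr_neg * pre_odds)
    return ppv, npv

se, sp = 0.737, 0.766                       # reported at the 11 mm cut-off
print("Youden index:", round(youden(se, sp), 3))          # ~0.503, as reported
ppv, npv = post_test_probabilities(se, sp, 0.01)           # assumed 1% pre-test prevalence
print(f"PPV = {ppv:.2%}, NPV = {npv:.2%}")
```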

Keywords: tuberculosis, tuberculin skin test, ROC curve, cut-off

Procedia PDF Downloads 67
235 Normalized Enterprises Architectures: Portugal's Public Procurement System Application

Authors: Tiago Sampaio, André Vasconcelos, Bruno Fragoso

Abstract:

The Normalized Systems Theory, which is designed to be applied to software architectures, provides a set of theorems, elements and rules, with the purpose of enabling evolution in Information Systems, as well as ensuring that they are ready for change. In order to make that possible, this work’s solution is to apply the Normalized Systems Theory to the domain of enterprise architectures, using Archimate. This application is achieved through the adaptation of the elements of this theory, making them artifacts of the modeling language. The theorems are applied through the identification of the viewpoints to be used in the architectures, as well as the transformation of the theory’s encapsulation rules into architectural rules. This way, it is possible to create normalized enterprise architectures, thus fulfilling the needs and requirements of the business. This solution was demonstrated using the Portuguese Public Procurement System. The Portuguese government aims to make this system as fair as possible, allowing every organization to have the same business opportunities. The aim is for every economic operator to have access to all public tenders, which are published in any of the 6 existing platforms, independently of where they are registered. In order to make this possible, we applied our solution to the construction of two different architectures, which are able of fulfilling the requirements of the Portuguese government. One of those architectures, TO-BE A, has a Message Broker that performs the communication between the platforms. The other, TO-BE B, represents the scenario in which the platforms communicate with each other directly. Apart from these 2 architectures, we also represent the AS-IS architecture that demonstrates the current behavior of the Public Procurement Systems. Our evaluation is based on a comparison between the AS-IS and the TO-BE architectures, regarding the fulfillment of the rules and theorems of the Normalized Systems Theory and some quality metrics.

Keywords: archimate, architecture, broker, enterprise, evolvable systems, interoperability, normalized architectures, normalized systems, normalized systems theory, platforms

Procedia PDF Downloads 356
234 Hysteresis Modeling in Iron-Dominated Magnets Based on a Deep Neural Network Approach

Authors: Maria Amodeo, Pasquale Arpaia, Marco Buzio, Vincenzo Di Capua, Francesco Donnarumma

Abstract:

Different deep neural network architectures have been compared and tested to predict magnetic hysteresis in the context of pulsed electromagnets for experimental physics applications. Modelling quasi-static or dynamic major and especially minor hysteresis loops is one of the most challenging topics for computational magnetism. Recent attempts at mathematical prediction in this context using Preisach models could not attain better than percent-level accuracy. Hence, this work explores neural network approaches and shows that the architecture that best fits the measured magnetic field behaviour, including the effects of hysteresis and eddy currents, is the nonlinear autoregressive exogenous neural network (NARX) model. This architecture aims to achieve a relative RMSE of the order of a few hundred ppm for complex magnetic field cycling, including arbitrary sequences of pseudo-random high-field and low-field cycles. The NARX-based architecture is compared with the state of the art, showing better performance than the classical operator-based and differential models, and is tested on a reference quadrupole magnetic lens used for CERN particle beams, chosen as a case study. The training and test datasets are a representative example of real-world magnet operation, making the good results obtained very promising for future applications in this context.
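
The following sketch illustrates the NARX regression structure on synthetic data: the field B(t) is regressed on lagged values of itself and of the exogenous excitation current I(t), here with a small MLP standing in for the deep architecture of the paper and trained in the usual one-step-ahead (series-parallel) fashion. The toy rate-dependent lag used to generate the data, the lag orders, and the network size are all assumptions; the CERN magnet measurements are not reproduced here.

```python
# NARX-style one-step-ahead regression: y(t) ~ f(u(t-1..t-nu), y(t-1..t-ny)).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic excitation: pseudo-random sequence of high/low current cycles.
t = np.linspace(0, 40 * np.pi, 8000)
current = np.abs(np.sin(0.3 * t)) * (1.0 + 0.5 * np.sin(0.037 * t))
field = np.zeros_like(current)
for i in range(1, len(t)):                       # toy first-order lag mimicking eddy currents
    field[i] = field[i - 1] + 0.05 * (np.tanh(2.0 * current[i]) - field[i - 1])
field += rng.normal(0, 1e-4, field.shape)

def lagged_design(u, y, nu=4, ny=4):
    """Build NARX regressors [u(t-1..t-nu), y(t-1..t-ny)] with target y(t)."""
    rows, targets = [], []
    for i in range(max(nu, ny), len(y)):
        rows.append(np.concatenate([u[i - nu:i], y[i - ny:i]]))
        targets.append(y[i])
    return np.array(rows), np.array(targets)

X, Y = lagged_design(current, field)
split = int(0.8 * len(Y))
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:split], Y[:split])
pred = model.predict(X[split:])
rel_rmse = np.sqrt(np.mean((pred - Y[split:]) ** 2)) / (Y.max() - Y.min())
print(f"relative RMSE on held-out cycles: {rel_rmse:.2e}")
```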

Keywords: deep neural network, magnetic modelling, measurement and empirical software engineering, NARX

Procedia PDF Downloads 130
233 Potassium-Phosphorus-Nitrogen Detection and Spectral Segmentation Analysis Using Polarized Hyperspectral Imagery and Machine Learning

Authors: Nicholas V. Scott, Jack McCarthy

Abstract:

Military, law enforcement, and counter terrorism organizations are often tasked with target detection and image characterization of scenes containing explosive materials in various types of environments where light scattering intensity is high. Mitigation of this photonic noise using classical digital filtration and signal processing can be difficult. This is partially due to the lack of robust image processing methods for photonic noise removal, which strongly influence high resolution target detection and machine learning-based pattern recognition. Such analysis is crucial to the delivery of reliable intelligence. Polarization filters are a possible method for ambient glare reduction by allowing only certain modes of the electromagnetic field to be captured, providing strong scene contrast. An experiment was carried out utilizing a polarization lens attached to a hyperspectral imagery camera for the purpose of exploring the degree to which an imaged polarized scene of potassium, phosphorus, and nitrogen mixture allows for improved target detection and image segmentation. Preliminary imagery results based on the application of machine learning algorithms, including competitive leaky learning and distance metric analysis, to polarized hyperspectral imagery, suggest that polarization filters provide a slight advantage in image segmentation. The results of this work have implications for understanding the presence of explosive material in dry, desert areas where reflective glare is a significant impediment to scene characterization.

Keywords: explosive material, hyperspectral imagery, image segmentation, machine learning, polarization

Procedia PDF Downloads 140