Search results for: detection limit
Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses
Authors: Neil Bar, Andrew Heweston
Abstract:
Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis packages offer statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods, obtaining markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
Keywords: Probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability.
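As a rough illustration of the sampling-based approach mentioned in the abstract, the sketch below estimates PF for a simple planar failure model by Monte-Carlo simulation. The limit-state geometry, the parameter distributions and their statistics are hypothetical placeholders, not values from the case study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte-Carlo realisations

# Hypothetical probabilistic inputs (not case-study values):
cohesion = rng.normal(35.0, 7.0, n)            # kPa
phi = np.radians(rng.normal(32.0, 3.0, n))     # friction angle
unit_weight = 26.0                             # kN/m3, treated deterministically
slope_height = 30.0                            # m
failure_plane_dip = np.radians(45.0)

# Simple planar limit-equilibrium factor of safety for each realisation
weight = 0.5 * unit_weight * slope_height**2 / np.tan(failure_plane_dip)
plane_length = slope_height / np.sin(failure_plane_dip)
driving = weight * np.sin(failure_plane_dip)
resisting = cohesion * plane_length + weight * np.cos(failure_plane_dip) * np.tan(phi)
fs = resisting / driving

pf = np.mean(fs < 1.0)  # probability of failure = P(FS < 1)
print(f"mean FS = {fs.mean():.2f}, PF = {pf:.3%}")
```

The point estimate method mentioned in the abstract would instead evaluate FS at a small set of weighted parameter combinations rather than drawing many random samples.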
Detecting Financial Bubbles Using Gap between Common Stocks and Preferred Stocks
Authors: Changju Lee, Seungmo Ku, Sondo Kim, Woojin Chang
Abstract:
How can financial bubbles be detected? Addressing this simple question has been the focus of a vast amount of empirical research spanning almost half a century. However, financial bubbles are hard to observe and vary over time, so more research in this area is needed. In this paper, we used the abnormal difference between common stock prices and the corresponding preferred stock prices to explain financial bubbles. First, we proposed the ‘W-index’, which indicates the spread between common stocks and their preferred stocks in the stock market. Second, to prove that this ‘W-index’ is valid for measuring financial bubbles, we showed that there is an inverse relationship between the ‘W-index’ and the S&P500 rate of return. Specifically, our hypothesis is that when the ‘W-index’ is comparably higher than in other periods, financial bubbles are building up in the stock market, and vice versa; according to our hypothesis, if investors made long-term investments when the ‘W-index’ was high, they would have a negative rate of return, whereas if they made long-term investments when the ‘W-index’ was low, they would have a positive rate of return. By comparing correlation values and adjusted R-squared values between the W-index and S&P500 return, the VIX index and S&P500 return, and the TED index and S&P500 return, we showed that only the W-index has a significant relationship with the S&P500 rate of return. In addition, we determined how long investors should hold their investment positions with regard to the effect of financial bubbles. Using this W-index, investors could measure financial bubbles in the market and invest with low risk.
Keywords: Financial bubbles, detection, preferred stocks, pairs trading, future return, forecast.
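A minimal sketch of the kind of spread measure described above: the column names, prices and forward returns are hypothetical, and the paper's exact construction of the W-index and its holding-period analysis are not reproduced here.

```python
import numpy as np
import pandas as pd

# Hypothetical daily closing prices for common/preferred share pairs.
prices = pd.DataFrame({
    "common_A": [100, 102, 105, 111, 118],
    "preferred_A": [80, 80.5, 81, 82, 82.5],
    "common_B": [50, 51, 53, 56, 60],
    "preferred_B": [40, 40.2, 40.5, 41, 41.2],
})

# One simple spread definition: average relative gap between each common
# stock and its preferred counterpart (the paper's W-index may differ).
pairs = [("common_A", "preferred_A"), ("common_B", "preferred_B")]
w_index = np.mean(
    [(prices[c] - prices[p]) / prices[p] for c, p in pairs], axis=0
)

# Hypothetical forward market returns over the same dates.
forward_return = pd.Series([0.01, 0.005, -0.002, -0.01, -0.02])

# An inverse relationship would show up as a negative correlation.
print("W-index:", np.round(w_index, 3))
print("correlation with forward return:", np.corrcoef(w_index, forward_return)[0, 1])
```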
The Ballistics Case Study of the Enrica Lexie Incident
Authors: Diego Abbo
Abstract:
On February 15, 2012, off the Indian coast of Kerala, in position 091702N-0760180E, bursts of 5.56x45 caliber shots were fired from Italian-made Beretta AR/70 assault rifles aboard the oil tanker Enrica Lexie, flying the Italian flag, towards the Indian fishing boat St. Anthony. Six shots hit the St. Anthony, two of which killed the Indian fishermen Ajesh Pink and Valentine Jelestine. From the analysis of the kinematic engagement of the two ships and from the autopsy and ballistic findings of the Indian judicial authorities, it is possible to reconstruct the trajectories of the six aforementioned shots. This essay reconstructs the trajectories of the six shots, which cannot have been fired directly but must have ricocheted off the water. As regards intermediate ballistics, the investigation scientifically demonstrates the rebound of the shots on the water, the gyrostatic deviation due to the rebound, and the tumbling effect likewise due to the rebound. Concerning the four shots that directly impacted the fishing vessel, the present examination proves, with scientific value, that the trajectories could not have been downwards but only upwards. Likewise, the trajectories of the two shots that fatally struck the two fishermen could not have been downwards but only upwards. In fact, this paper demonstrates, with scientific value: the loss of speed of the projectiles due to the rebound on the water; the tumbling effect in the ballistic medium within the two victims; the permanent cavities subject to injury ballistics and the related ballistic trauma that prevented homeostasis, causing bleeding in one case; the thermo-hardening deformation of the bullet found in Valentine Jelestine's skull; and the upward, not downward, trajectories. The paper constitutes a tool in forensic ballistics in that it manages to reconstruct, from the final location of the fired projectiles, all phases of ballistics: the internal ballistics of the weapons that fired, the intermediate, the terminal and the penetrative structural phases. In general terms, the ballistic reconstruction is based on measurable parameters whose values are contained with certainty within a lower and an upper limit. Therefore, quantities referring to angles, speed, impact energy and the firing position of the shooter can be identified within the aforementioned limits. Finally, the investigation into the internal bullet track, obtained from the autopsy examination, offers a significant “lesson learned” but above all a starting point for containing or mitigating bleeding in the treatment of future gunshot wounds.
Keywords: Impact physics, intermediate ballistics, terminal ballistics, tumbling effect.
A New Multi-Target, Multi-Agent Search-and-Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling discrete agent actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses CPLEX optimization machinery, efficiently computing near-optimal solutions for practical size problems, while giving a robust upper bound obtained from Lagrangean integrality constraint relaxation. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state, and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
Keywords: Search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization.
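The full MIP formulation with anticipated feedback is beyond a short snippet, but the underlying objective, maximizing cumulative probability of detection along a path, can be sketched by brute-force enumeration for a single agent on a tiny grid. The prior, detection probability, horizon and grid size are hypothetical assumptions; the paper solves the equivalent problem as a MIP with CPLEX instead.

```python
import numpy as np

# Hypothetical 3x3 grid: prior probability that the target is in each cell,
# and the probability of detecting it when the agent visits that cell.
prior = np.array([[0.05, 0.10, 0.05],
                  [0.10, 0.30, 0.10],
                  [0.05, 0.20, 0.05]])
p_detect = 0.8
horizon = 4
start = (0, 0)

def neighbours(cell):
    r, c = cell
    moves = [(r + dr, c + dc) for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]]
    return [(r2, c2) for r2, c2 in moves if 0 <= r2 < 3 and 0 <= c2 < 3]

def success_probability(path):
    # Cumulative probability of detection: each visit to a cell removes a
    # fraction p_detect of the probability mass still unsearched there.
    remaining = prior.copy()
    total = 0.0
    for cell in path:
        gained = remaining[cell] * p_detect
        total += gained
        remaining[cell] -= gained
    return total

best_path, best_value = None, -1.0
def extend(path):
    # Enumerate all feasible move sequences of length `horizon` (fine for toy sizes).
    global best_path, best_value
    if len(path) == horizon + 1:
        value = success_probability(path)
        if value > best_value:
            best_path, best_value = path, value
        return
    for nxt in neighbours(path[-1]):
        extend(path + [nxt])

extend([start])
print("best path:", best_path, "cumulative detection probability:", round(best_value, 3))
```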
Robotics and Embedded Systems Applied to the Buried Pipeline Inspection
Authors: Robson C. Santos, Julio C. P. Ribeiro, Iorran M. de Castro, Luan C. F. Rodrigues, Sandro R. L. Silva, Diego M. Quesada
Abstract:
The work aims to develop a robot, in the form of an autonomous vehicle, to detect, inspect and map underground pipelines using the ATmega328 Arduino platform. The hardware is programmed in a language very similar to C/C++, which facilitates its use in open-source robotics and resembles the PLCs used in large industrial processes. The robot traverses the surface independently of direct human action, in order to automate the process of detecting buried pipes, guided by electromagnetic induction. The induction signal comes from coils that feed the Arduino microcontroller, which measures the difference in intensity and processes the information; it then drives electrical components such as relays and motors, allowing the prototype to move over the surface and gather the necessary information. Changes of direction are performed by a stepper motor together with a servo motor. The robot was developed through electrical and electronic assemblies that allowed its application to be tested. The assembly is made up of metal detector coils, circuit boards and a microprocessor; these previously developed, interconnected circuits determine the process control and mechanical actions of a robot (autonomous car) that performs the detection and mapping of buried pipelines. This type of prototype can help identify possible landslides and prevent buried pipelines from suffering external pressure on their walls, with the associated possibility of oil leakage and pollution of the environment.
Keywords: Robotic, metal detector, embedded system, pipeline.
Verification of On-Line Vehicle Collision Avoidance Warning System using DSRC
Authors: C. W. Hsu, C. N. Liang, L. Y. Ke, F. Y. Huang
Abstract:
Many accidents happen because of fast driving, habitual overtime work or fatigue. This paper presents a remote warning solution for vehicle collision avoidance using vehicular communication. The developed system integrates dedicated short range communication (DSRC) and the global positioning system (GPS) with an embedded system into a powerful remote warning system. DSRC communication technology is adopted as the bridge to transmit vehicular information and broadcast vehicle positions. The proposed system is divided into two parts in a vehicle: the positioning unit and the vehicular unit. The positioning unit provides position and heading information from the GPS module, while the vehicular unit receives the brake, throttle and other signals via a controller area network (CAN) interface connected to each mechanism. The mobile hardware is built as an embedded system using an X86 processor running Linux. A vehicle communicates with other vehicles via DSRC in a non-addressed protocol using the wireless access in vehicular environments (WAVE) short message protocol. From the position data and vehicular information, this paper provides a conflict detection algorithm that performs time separation and remote warning with error-bubble consideration, and the warning information is displayed on-line on the screen. This system is able to enhance driver assistance services and improve critical safety by using vehicular information from neighboring vehicles.
Keywords: Dedicated short range communication, GPS, Control area network, Collision avoidance warning system.
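A minimal sketch of the kind of time-separation check such a conflict detection algorithm might perform, using straight-line extrapolation of two vehicles' positions and velocities. The thresholds and the "error bubble" radius are hypothetical, not values from the paper.

```python
import numpy as np

def time_to_closest_approach(p1, v1, p2, v2):
    """Time (s) at which two constant-velocity vehicles are closest, and that distance (m)."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    if np.allclose(dv, 0.0):
        return 0.0, float(np.linalg.norm(dp))
    t = max(0.0, -float(np.dot(dp, dv)) / float(np.dot(dv, dv)))
    return t, float(np.linalg.norm(dp + t * dv))

# Hypothetical local east/north coordinates (m) and velocities (m/s)
# derived from the GPS position and heading of the two vehicles.
t, d = time_to_closest_approach(p1=(0, 0), v1=(20, 0), p2=(300, 5), v2=(-15, 0))

ERROR_BUBBLE_M = 10.0    # assumed combined position-uncertainty bubble
WARNING_HORIZON_S = 8.0  # assumed look-ahead for issuing a warning

if d < ERROR_BUBBLE_M and t < WARNING_HORIZON_S:
    print(f"WARNING: predicted separation {d:.1f} m in {t:.1f} s")
else:
    print(f"no conflict: closest approach {d:.1f} m in {t:.1f} s")
```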
An Improved Total Variation Regularization Method for Denoising Magnetocardiography
Authors: Yanping Liao, Congcong He, Ruigang Zhao
Abstract:
The application of magnetocardiography signals to detect cardiac electrical function is a technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUID) and has considerable advantages over electrocardiography (ECG). However, extracting the Magnetocardiography (MCG) signal buried in noise is difficult, and this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization optimization problem, and the majorization-minimization algorithm is applied to iteratively solve it. However, the traditional TV regularization method tends to cause a staircase effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising MCG signals is proposed to improve the denoising precision. The improvement consists of three main parts. First, higher-order TV is applied to reduce the staircase effect, and the corresponding second-order derivative matrix is used in place of the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise while preserving signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
Keywords: Constraint parameters, derivative matrix, magnetocardiography, regular term, total variation.
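A minimal sketch of standard first-order TV denoising solved by majorization-minimization on a synthetic signal. The higher-order, peak-adaptive improvements described in the abstract are not implemented here, and the regularization weight is an arbitrary assumption.

```python
import numpy as np

def tv_denoise_mm(y, lam=0.5, n_iter=50):
    """First-order TV denoising min 0.5||y-x||^2 + lam*||Dx||_1 via majorization-minimization."""
    n = len(y)
    # First-order difference matrix D (dense here for clarity; banded in practice).
    D = np.diff(np.eye(n), axis=0)
    DDT = D @ D.T
    x = y.copy()
    for _ in range(n_iter):
        w = np.abs(D @ x) + 1e-8          # majorizer weights |Dx_k|
        F = (1.0 / lam) * np.diag(w) + DDT
        x = y - D.T @ np.linalg.solve(F, D @ y)
    return x

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(100), np.ones(50), np.zeros(100)])  # piecewise-constant test signal
noisy = clean + 0.25 * rng.standard_normal(clean.size)
denoised = tv_denoise_mm(noisy)
print("RMSE noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))
```

The higher-order variant in the abstract would replace D by a second-order difference matrix and make the weights adaptive around detected peaks.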
Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs
Authors: M. De Filippo, J. S. Kuang
Abstract:
In the construction industry, reinforced concrete (RC) slabs are fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early decades of the last century, the yield-line method was proposed to solve this problem. Simple geometries could easily be handled using traditional hand analyses based on plasticity theory. Nowadays, advanced finite element (FE) analyses have found their way into applications in many engineering fields due to the wide range of geometries to which they can be applied. In such cases, the choice of an elastic or a plastic constitutive model completely changes the approach of the analysis itself. Elastic methods are popular due to their easy applicability in automated computations. However, elastic analyses are limited since they do not consider any aspect of material behaviour beyond the yield limit, which is an essential aspect of RC structural performance. Non-linear analyses modelling plastic behaviour, by contrast, give very reliable results; however, this type of analysis is computationally quite expensive, i.e. not well suited to solving everyday engineering problems. In past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper proposes a numerical procedure through which a pseudo-lower bound solution, not violating the yield criterion, is achieved. The advantages of moment distribution are taken into account, hence the increase in strength provided by plastic behaviour is considered. The lower bound solution is improved by detecting over-yielded moments, which are used to artificially govern the moment distribution among the remaining non-yielded elements. The proposed technique obeys Nielsen’s yield criterion. The outcome of this analysis provides a simple, accurate and non-time-consuming tool for predicting the lower-bound collapse load of RC slabs. Using this method, structural engineers can find the fracture patterns and the ultimate load bearing capacity. The collapse-triggering mechanism is found by detecting yield-lines. An application to the simple case of a square clamped slab is shown, and a good match was found with the exact values of the collapse load.
Keywords: Computational mechanics, lower bound method, reinforced concrete slabs, yield-line.
Detecting Fake News: A Natural Language Processing, Reinforcement Learning, and Blockchain Approach
Authors: Ashly Joseph, Jithu Paulose
Abstract:
In an era where misleading information may quickly circulate on digital news channels, it is crucial to have efficient and trustworthy methods to detect and reduce the impact of misinformation. This research proposes an innovative framework that combines Natural Language Processing (NLP), Reinforcement Learning (RL), and Blockchain technologies to precisely detect and minimize the spread of false information in news articles on social media. The framework starts by gathering a variety of news items from different social media sites and preprocessing the data to ensure its quality and uniformity. NLP methods are utilized to extract comprehensive linguistic and semantic characteristics, effectively capturing the subtleties and contextual aspects of the language used. These features are utilized as input for an RL model. This model acquires the most effective tactics for detecting and mitigating the impact of false material by modeling the intricate dynamics of user engagements and incentives on social media platforms. The integration of blockchain technology establishes a decentralized and transparent method for storing and verifying the accuracy of information. The Blockchain component guarantees the unchangeability and safety of verified news records, while encouraging user engagement for detecting and fighting false information through an incentive system based on tokens. The suggested framework seeks to provide a thorough and resilient solution to the problems presented by misinformation in social media articles.
Keywords: Natural Language Processing, Reinforcement Learning, Blockchain, fake news mitigation, misinformation detection.
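The RL and blockchain components are beyond a short snippet, but the NLP feature-extraction and classification step described above can be sketched as a simple baseline. The texts, labels and the choice of TF-IDF plus logistic regression are illustrative assumptions, not the paper's pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled headlines (1 = misinformation, 0 = legitimate).
texts = [
    "Miracle cure eliminates all disease overnight, doctors stunned",
    "Central bank announces quarterly interest rate decision",
    "Secret plot revealed: celebrities control the weather",
    "City council approves budget for new public library",
]
labels = [1, 0, 1, 0]

# TF-IDF captures word/bigram usage; a linear classifier scores each article.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict_proba(["Shocking secret cure the government hides"])[:, 1])
```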
Design Charts for Strip Footing on Untreated and Cement Treated Sand Mat over Underlying Natural Soft Clay
Authors: Sharifullah Ahmed, Sarwar Jahan Md. Yasin
Abstract:
Shallow foundations on unimproved soft natural soils can undergo high consolidation and secondary settlement. For low and medium rise building projects on such soil conditions, pile foundations may not be cost effective. In such cases, an alternative to pile foundations may be shallow strip footings placed on a double-layered improved soil system. The upper layer of this system is untreated or cement-treated compacted sand and the underlying layer is natural soft clay. This system reduces the settlement to an allowable limit. The current research addresses the settlement of a rigid plane-strain strip footing of 2.5 m width placed on the surface of a soil consisting of an untreated or cement-treated sand layer overlying a bed of homogeneous soft clay. The settlement of this shallow foundation has been studied for sand layer thicknesses of 0.3 to 0.9 times the width of the footing. The response of the clay layer is assumed to be undrained for plastic loading stages and drained during consolidation stages. The response of the sand layer is drained during all loading stages. FEM analysis was done using PLAXIS 2D Version 8.0. A natural clay deposit of 15 m thickness and 18 m width has been modeled using the Hardening Soil Model, Soft Soil Model and Soft Soil Creep Model, and the upper improvement layer has been modeled using only the Hardening Soil Model. The groundwater level is at the top of the clay deposit, making the system fully saturated. A parametric study has been conducted to determine the effect of the thickness, density and cementation of the sand mat, and of the density and shear strength of the soft clay layer, on the settlement of the strip foundation under uniformly distributed vertical loads of varying magnitude. A set of charts has been established for designing shallow strip footings on a sand mat over a thick, soft clay deposit by obtaining the particular thickness of sand mat for particular subsoil parameters to ensure no punching shear failure and no settlement beyond the allowable level. Design guidelines in the form of non-dimensional charts have been developed for footing pressures equivalent to medium-rise residential or commercial building foundations with strip footings on soft inorganic Normally Consolidated (NC) soil of Bangladesh having void ratios from 1.0 to 1.45.
Keywords: Design charts, ground improvement, PLAXIS 2D, primary and secondary settlement, sand Mat, soft clay.
Analysis of Seismic Waves Generated by Blasting Operations and their Response on Buildings
Authors: S. Ziaran, M. Musil, M. Cekan, O. Chlebo
Abstract:
The paper analyzes the response of buildings and industrial structures to seismic waves (low frequency mechanical vibration) generated by blasting operations. The principles of seismic analysis can be applied to different kinds of excitation such as: earthquakes, wind, explosions, random excitation from local transportation, periodic excitation from large rotating and/or reciprocating machines, metal forming processes such as forging, shearing and stamping, chemical reactions, construction and earth moving work, and other strong deterministic and random energy sources caused by human activities. The article deals with the response of a residential home to seismic, low frequency, mechanical vibrations generated by nearby blasting operations. The goal was to determine the fundamental natural frequencies of the measured structure; it is therefore important to determine the resonant frequencies in order to design suitable modal damping. The article also analyzes the package of seismic waves generated by blasting (primary P-waves and secondary S-waves) and investigates the transfer regions. For the detection of seismic waves resulting from an explosion, the Fast Fourier Transform (FFT) and modal analysis in the frequency domain are used, and the signal was also acquired and analyzed in the time domain. In the conclusions, the measured results of seismic waves caused by blasting in a nearby quarry and their effect on a nearby structure (a house) are analyzed. The response of the house, including its fundamental natural frequency and possible fatigue damage, is also assessed.
Keywords: Building structure, seismic waves, spectral analysis, structural response.
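A minimal sketch of the frequency-domain step mentioned above: estimating dominant (resonant) frequencies of a vibration record with an FFT. The synthetic signal, sampling rate and number of reported peaks are assumptions for illustration only.

```python
import numpy as np

fs = 1000.0                      # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s record

# Synthetic "blast response": two decaying modes at 12 Hz and 35 Hz plus noise.
signal = (np.exp(-1.5 * t) * np.sin(2 * np.pi * 12 * t)
          + 0.5 * np.exp(-2.0 * t) * np.sin(2 * np.pi * 35 * t)
          + 0.05 * np.random.default_rng(1).standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

# Report the strongest spectral bins as candidate natural frequencies.
peak_order = np.argsort(spectrum)[::-1][:5]
for idx in sorted(peak_order):
    print(f"{freqs[idx]:6.2f} Hz  amplitude {spectrum[idx]:.2f}")
```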
Speciation, Preconcentration, and Determination of Iron(II) and (III) Using 1,10-Phenanthroline Immobilized on Alumina-Coated Magnetite Nanoparticles as a Solid Phase Extraction Sorbent in Pharmaceutical Products
Authors: Hossein Tavallali, Mohammad Ali Karimi, Gohar Deilamy-Rad
Abstract:
The proposed method for the speciation, preconcentration and determination of Fe(II) and Fe(III) in pharmaceutical products was developed using alumina-coated magnetite nanoparticles (Fe3O4/Al2O3 NPs) as a solid phase extraction (SPE) sorbent in the magnetic mixed hemimicelle solid phase extraction (MMHSPE) technique, followed by flame atomic absorption spectrometry analysis. The procedure is based on complexation of Fe(II) with 1,10-phenanthroline (OP), a complexing reagent for Fe(II) immobilized on the modified Fe3O4/Al2O3 NPs. The extraction and concentration process for the pharmaceutical sample was carried out in a single step by mixing the extraction solvent and the magnetic adsorbents under ultrasonic action. The adsorbents were then easily isolated from the complicated matrix with an external magnetic field. Fe(III) ions were determined after being readily reduced to Fe(II) by adding a suitable reducing agent to the sample solutions. Compared with traditional methods, the MMHSPE method simplifies the operating procedure and reduces the analysis time. Various parameters influencing the speciation and preconcentration of trace iron, such as pH, sample volume, amount of sorbent, and type and concentration of eluent, were studied. Under the optimized operating conditions, a preconcentration factor of 167 was obtained for Fe(II) with the modified nano-magnetite. The detection limit and linear range of this method for iron were 1.0 ng.mL−1 and 9.0 - 175 ng.mL−1, respectively. The relative standard deviation for five replicate determinations of 30.00 ng.mL−1 Fe2+ was 2.3%.
Keywords: Alumina-coated magnetite nanoparticles, magnetic mixed hemimicelle solid-phase extraction, Fe(ΙΙ) and Fe(ΙΙΙ), pharmaceutical sample.
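Since the entry reports a detection limit and linear range, the sketch below shows one common way such figures of merit are estimated from a calibration curve (LOD ≈ 3.3·σ/slope, LOQ ≈ 10·σ/slope). The calibration data are hypothetical and this is not the authors' calculation.

```python
import numpy as np

# Hypothetical calibration standards: concentration (ng/mL) vs. instrument signal.
conc = np.array([9.0, 25.0, 50.0, 100.0, 150.0, 175.0])
signal = np.array([0.021, 0.055, 0.108, 0.212, 0.318, 0.371])

# Ordinary least-squares fit: signal = slope * conc + intercept.
slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # standard error of the regression

lod = 3.3 * sigma / slope              # limit of detection
loq = 10.0 * sigma / slope             # limit of quantification
r2 = 1 - np.sum(residuals**2) / np.sum((signal - signal.mean())**2)

print(f"slope={slope:.5f}, R^2={r2:.4f}, LOD={lod:.2f} ng/mL, LOQ={loq:.2f} ng/mL")
```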
Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms
Authors: J. Prakash, K. Rajesh
Abstract:
In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme which consists of the Eigen values of covariance matrices, the circular Hough transform and Bresenham's raster scan algorithm. In this approach we use the fact that the large and small Eigen values of covariance matrices are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse can be identified using the circular Hough transform (CHT). A sparse matrix technique is used to perform the CHT. Since sparse matrices squeeze out zero elements and contain only a small number of nonzero elements, they provide an advantage in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using a raster scan algorithm which exploits the geometrical symmetry property. This method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed and accuracy in identifying the feature. The new method has been tested on both synthetic and real images. Several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, and comparisons with the Hough transform, its variants and other tangent-based methods, are reported.
Keywords: Circular Hough transform, covariance matrix, Eigen values, ellipse detection, raster scan algorithm.
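A minimal sketch of the eigenvalue step described above: recovering the axis lengths and orientation of an ellipse from the covariance matrix of its boundary points. The synthetic point set is an illustration; the Hough-transform centre detection and raster scan stages are not included.

```python
import numpy as np

# Synthetic boundary points of an ellipse with semi-axes a=5, b=2, rotated 30 degrees.
theta = np.linspace(0, 2 * np.pi, 500, endpoint=False)
a_true, b_true, angle = 5.0, 2.0, np.radians(30)
x = a_true * np.cos(theta)
y = b_true * np.sin(theta)
pts = np.column_stack([x * np.cos(angle) - y * np.sin(angle),
                       x * np.sin(angle) + y * np.cos(angle)])

# Covariance of the centred boundary points; its eigenvalues relate to the axes.
cov = np.cov(pts - pts.mean(axis=0), rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order

# For points uniform in parameter angle, the variance along each axis is (semi-axis)^2 / 2.
minor = np.sqrt(2 * eigvals[0])
major = np.sqrt(2 * eigvals[1])
orientation = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))

print(f"estimated semi-axes: major={major:.2f}, minor={minor:.2f}, orientation={orientation:.1f} deg")
```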
Detection of Transgenes in Cotton (Gossypium hirsutum L.) by Using Biotechnology/Molecular Biological Techniques
Authors: Ahmad Ali Shahid, Muhammad Shakil Shaukat, Kamran Shehzad Bajwa, Abdul Qayyum Rao, Tayyab Husnain
Abstract:
Agriculture is the backbone of the economy of Pakistan, and cotton is the major agricultural export and the supreme source of raw fiber for our textile industry. To combat severe insect and weed problems, a combination of three genes, namely Cry1Ac, Cry2A and EPSPS, was transferred into the locally cultivated cotton variety MNH-786 using Agrobacterium-mediated genetic transformation. The present study focused on the molecular screening of transgenic cotton plants at the T3 generation in order to confirm the integration and expression of all three genes (Cry1Ac, Cry2A and EPSP synthase) in the cotton genome. Initially, a glyphosate spray assay was used for screening transgenic cotton plants containing the EPSP synthase gene at the T3 generation. Transgenic cotton plants which were healthy and showed no damage on the leaves were selected after 7 days of spraying. For molecular analysis of the transgenic cotton plants in the laboratory, their genomic DNA was isolated and subjected to amplification of the three genes. Seventeen out of twenty plants (Cry1Ac gene), ten out of twenty (Cry2A gene) and all twenty (EPSP synthase gene) produced positive amplification. On the basis of PCR amplification, ten transgenic plant samples were subjected to protein expression analysis through ELISA. The results showed that eight out of ten plants were actively expressing the three transgenes. Real-time PCR was also done to quantify the mRNA expression levels of the Cry1Ac and EPSP synthase genes. Finally, eight plants were confirmed for the presence and active expression of all three genes at the T3 generation.
Keywords: Agriculture, Cotton, Transformation, Cry Genes, ELISA and PCR.
Development and Validation of a UPLC Method for the Determination of Albendazole Residues on Pharmaceutical Manufacturing Equipment Surfaces
Authors: R. S. Chandan, M. Vasudevan, Deecaraman, B. M. Gurupadayya
Abstract:
In pharmaceutical industries, it is very important to remove drug residues from the equipment and areas used. The cleaning procedure must be validated, so special attention must be devoted to the methods used for the analysis of trace amounts of drugs. A rapid, sensitive and specific reverse phase ultra performance liquid chromatographic (UPLC) method was developed for the quantitative determination of Albendazole in cleaning validation swab samples. The method was validated using an ACQUITY HSS C18, 50 x 2.1 mm, 1.8 μ column with an isocratic mobile phase containing a mixture of 1.36 g of potassium dihydrogen phosphate in 1000 mL MilliQ water with 2 mL of triethylamine, pH adjusted to 2.3 ± 0.05 with ortho-phosphoric acid, acetonitrile and methanol (50:40:10 v/v). The flow rate of the mobile phase was 0.5 mL min-1, with a column temperature of 35°C and detection at 254 nm using a PDA detector. The injection volume was 2 µl. Cotton swabs, moistened with acetonitrile, were used to remove any residue of the drug from stainless steel, teflon, rubber and silicon plates which mimic the production equipment surfaces, and the mean extraction recovery was found to be 91.8%. The selected chromatographic conditions were found to effectively elute Albendazole with a retention time of 0.67 min. The proposed method was found to be linear over the range of 0.2 to 150 µg/mL with a correlation coefficient of 0.9992. The proposed method was found to be accurate, precise, reproducible and specific, and it can also be used for routine quality control analysis of these drugs in biological samples either alone or in combined pharmaceutical dosage forms.
Keywords: Cleaning validation, Albendazole, residues, swab analysis, UPLC.
Autonomous Robots' Visual Perception in Underground Terrains Using Statistical Region Merging
Authors: Omowunmi E. Isafiade, Isaac O. Osunmakinde, Antoine B. Bagula
Abstract:
Robots' visual perception is a field that is gaining increasing attention from researchers. This is partly due to emerging trends in the commercial availability of 3D scanning systems or devices that produce a high level of information accuracy for a variety of applications. In the history of mining, the mortality rate of mine workers has been alarming, and robots exhibit a great deal of potential to tackle safety issues in mines. However, an effective vision system is crucial to safe autonomous navigation in underground terrains. This work investigates robots' perception in underground terrains (mines and tunnels) using the statistical region merging (SRM) model. SRM reconstructs the main structural components of an image by a simple but effective statistical analysis. An investigation is conducted on different regions of the mine, such as the shaft, stope and gallery, using publicly available mine frames together with a stream of locally captured mine images. An investigation is also conducted on a stream of underground tunnel image frames, using the XBOX Kinect 3D sensor. The Kinect sensor produces streams of red, green and blue (RGB) and depth images of 640 x 480 resolution at 30 frames per second. Integrating the depth information into the drivability analysis gives a strong cue, providing 3D results that augment the drivable and non-drivable regions detected in 2D. The results of the 2D and 3D experiments with different terrains, mines and tunnels, together with the qualitative and quantitative evaluation, reveal that a good drivable region can be detected in dynamic underground terrains.
Keywords: Drivable Region Detection, Kinect Sensor, Robots' Perception, SRM, Underground Terrains.
An Integrated Approach to Child Care Earthquake Preparedness through “Telemachus” Project
Authors: A. Kourou, S. Kyriakopoulos, N. Anyfanti
Abstract:
A lot of children under the age of five spend their daytime hours away from their home, in a kindergarten. Caring for children is a serious subject, and their safety in case of earthquake is the first priority. Being aware of earthquakes helps to prioritize the needs and take the appropriate actions to limit the effects. Earthquakes occurring anywhere at any time require emergency planning. Earthquake planning is a cooperative effort, and childcare providers have unique roles and responsibilities. Greece has high seismicity, and the Ionian Islands Region has the highest seismic activity in the country. The Earthquake Planning and Protection Organization (EPPO) is a national organization in Greece. The mission of EPPO is seismic risk reduction through the design of an earthquake management program of mitigation and preparedness. Among other actions, EPPO has analyzed the needs and requirements of kindergartens on earthquake protection issues and has designed specific activities to familiarize day care centre staff with being prepared for earthquakes. This research presents the results of a survey that assesses the level of earthquake preparedness of kindergartens all over the country, including the Ionian Islands. A closed-form questionnaire of 20 main questions was developed for the survey in order to capture the participants' views on earthquake preparedness actions at the individual, family and day care environment level. 2668 questionnaires were gathered from March 2014 to May 2019 and analyzed by EPPO's Department of Education. Moreover, this paper presents the EPPO educational activities targeted at the Ionian Islands Region that were implemented in the framework of the “Telemachus” Project. Providing a safe environment for children to learn and staff to work in is the foremost goal of any State, community and kindergarten. This project is funded under the Priority Axis “Environmental Protection and Sustainable Development” of the Operational Plan “Ionian Islands 2014-2020”. It is increasingly accepted that emergency preparedness should be thought of as an ongoing process rather than a one-time activity. Creating an earthquake-safe daycare environment that facilitates learning is a challenging task. Training, drills, and updates of the emergency plan should take place throughout the year at kindergartens to identify any gaps and to keep emergency procedures effective. EPPO will continue to work closely with regional and local authorities to actively address the needs of children and kindergartens before, during and after earthquakes.
Keywords: Child care centers, education on earthquake issues, emergency planning, Ionian Islands Region of Greece, kindergartens, preparedness.
Development of an Ensemble Classification Model Based on Hybrid Filter-Wrapper Feature Selection for Email Phishing Detection
Authors: R. B. Ibrahim, M. S. Argungu, I. M. Mungadi
Abstract:
It is obvious that, at the present time, the internet has become an indispensable part of human life. The Internet has provided diverse opportunities to make life easy for human beings through the adoption of various channels, among them email, internet banking and video conferencing. Email is one of the easiest means of communication, hugely accepted among individuals and organizations globally. But over the decades the security integrity of this platform has been challenged by malicious activities like phishing. Email phishing is designed by phishers to fool the recipient into handing over sensitive personal information such as passwords, credit card numbers, account credentials, social security numbers, etc. This activity has caused a lot of financial damage to email users globally, which has resulted in bankruptcy, sudden death of victims, and other health-related sicknesses. Although many methods have been proposed to detect email phishing, in this research the results of multiple machine-learning methods for predicting email phishing have been compared with the use of filter-wrapper feature selection. It is worth noting that all three models performed substantially well, but one outperformed the others. The dataset used for these models was obtained from the Kaggle online data repository, and three classifiers, decision tree, Naïve Bayes and logistic regression, were each used in a bagging ensemble. Results from the study show that the Decision Tree (CART) bagging ensemble recorded the highest accuracy of 98.13% using PEF (Phishing Essential Features). This result further demonstrates the dependability of the proposed model.
Keywords: Ensemble, hybrid, filter-wrapper, phishing.
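A minimal sketch of the bagging step described above, ensembling a decision tree with scikit-learn. The feature matrix and labels are synthetic placeholders, not the Kaggle dataset or the paper's filter-wrapper-selected PEF features.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix: rows = emails, columns = already-selected features
# (e.g. counts of suspicious URLs, urgent wording flags, header anomalies).
rng = np.random.default_rng(0)
X = rng.random((200, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.random(200) > 0.8).astype(int)  # synthetic label

# Bagging ensemble of CART decision trees, as in the best-performing model reported.
model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))
```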
Stereo Motion Tracking
Authors: Yudhajit Datta, Jonathan Bandi, Ankit Sethia, Hamsi Iyer
Abstract:
Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software that combines the two approaches to perform stereo motion tracking typically employs complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study aims to explore a strategy for combining two techniques: two-dimensional motion tracking using a Kalman filter, and depth detection of the object using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For stereo motion tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. In discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the perpendicular distance of the object from that plane as depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find application in high security surveillance scenes such as the premises of bank vaults, prisons or other detention facilities, as well as in low cost applications in supermarkets and car parking lots.
Keywords: Kalman Filter, Stereo Vision, Motion Tracking, Matlab, Object Tracking, Camera Calibration, Computer Vision System Toolbox.
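A minimal sketch of the two ingredients named above: depth from stereo disparity for a calibrated, rectified rig, and a constant-velocity Kalman filter tracking the in-plane position. Focal length, baseline, noise levels and the measurements are all hypothetical.

```python
import numpy as np

# --- Stereo depth: Z = f * B / disparity (pinhole model, rectified cameras) ---
focal_px = 800.0        # assumed focal length in pixels
baseline_m = 0.12       # assumed distance between the two cameras
disparity_px = 16.0     # matched feature: x_left - x_right
depth = focal_px * baseline_m / disparity_px
print(f"depth from plane of cameras: {depth:.2f} m")

# --- Constant-velocity Kalman filter for the in-plane (x, y) position ---
dt = 1.0 / 30.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])  # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])                                # only position is measured
Q = 1e-3 * np.eye(4)    # assumed process noise
R = 4.0 * np.eye(2)     # assumed measurement noise (pixels^2)

x = np.zeros(4)         # state: [x, y, vx, vy]
P = np.eye(4) * 100.0

for z in [np.array([320.0, 240.0]), np.array([324.0, 241.0]), np.array([329.0, 243.0])]:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new pixel measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print("filtered position:", np.round(x[:2], 1), "velocity (px/frame):", np.round(x[2:] * dt, 2))
```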
Analysis of Driver Point of Regard Determinations with Eye-Gesture Templates Using Receiver Operating Characteristic
Authors: Siti Nor Hafizah binti Mohd Zaid, Mohamed Abdel-Maguid, Abdel-Hamid Soliman
Abstract:
An Advanced Driver Assistance System (ADAS) is a computer system on board a vehicle which is used to reduce the risk of vehicular accidents by monitoring factors relating to the driver, vehicle and environment and taking some action when a risk is identified. Much work has been done on assessing vehicle and environmental state, but there is still comparatively little published work that tackles the problem of driver state. Visual attention is one such driver state. In fact, some researchers claim that lack of attention is the main cause of accidents, as factors such as fatigue, alcohol or drug use, distraction and speeding all impair the driver's capacity to pay attention to the vehicle and road conditions [1]. This seems to imply that the main cause of accidents is inappropriate driver behaviour in cases where the driver is not giving full attention while driving. The work presented in this paper proposes an ADAS system which uses an image-based template matching algorithm to detect if a driver is failing to observe particular windscreen cells. This is achieved by dividing the windscreen into 24 uniform cells (4 rows of 6 columns) and matching video images of the driver's left eye with eye-gesture templates drawn from images of the driver looking at the centre of each windscreen cell. The main contribution of this paper is to assess the accuracy of this approach using Receiver Operating Characteristic analysis. The results of our evaluation give a sensitivity value of 84.3% and a specificity value of 85.0% for the eye-gesture template approach, indicating that it may be useful for driver point of regard determinations.
Keywords: Advanced Driver Assistance Systems, Eye-Tracking, Hazard Detection.
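To make the reported figures concrete, the sketch below computes sensitivity and specificity from a confusion matrix; the counts are invented for illustration and are not the study's data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); Specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: "positive" = driver actually looking at the windscreen cell.
tp, fn = 843, 157   # correctly / incorrectly classified positives
tn, fp = 850, 150   # correctly / incorrectly classified negatives

sens, spec = sensitivity_specificity(tp, fn, tn, fp)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")

# Sweeping the template-matching threshold and recording (1 - specificity,
# sensitivity) pairs at each setting would trace the ROC curve used in the paper.
```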
Space Telemetry Anomaly Detection Based on Statistical PCA Algorithm
Authors: B. Nassar, W. Hussein, M. Mokhtar
Abstract:
The critical concern of satellite operations is to ensure the health and safety of satellites. The worst case from this perspective is probably the loss of a mission, but the more common interruption of satellite functionality can also result in compromised mission objectives. All the data acquired from the spacecraft are known as Telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each single item of information is contained in a telemetry parameter, which represents a time-variant property (i.e. a status or a measurement) to be checked. As a consequence, there is continuous improvement of TM monitoring systems to reduce the time required to respond to changes in a satellite's state of health. A rapid assessment of the current state of the satellite is thus very important for responding to occurring failures. Statistical multivariate latent techniques are among the vital learning tools used to tackle the above problem coherently. Information extraction from such rich data sources using advanced statistical methodologies is a challenging task due to the massive volume of data. To solve this problem, in this paper we present a proposed unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique. The algorithm is applied to an actual remote sensing spacecraft. Data from the Attitude Determination and Control System (ADCS) were acquired under two operating conditions: normal and faulty states. The models were built and tested under these conditions, and the results show that the algorithm could successfully differentiate between them. Furthermore, the algorithm provides competent information for prediction as well as adding more insight and physical interpretation to ADCS operation.
Keywords: Space telemetry monitoring, multivariate analysis, PCA algorithm, space operations.
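A minimal sketch of PCA-based monitoring in the spirit described above: fit PCA on nominal telemetry, then flag samples whose squared reconstruction error (SPE/Q statistic) exceeds a threshold. The synthetic telemetry, number of components and threshold rule are assumptions, not the ADCS data or the paper's exact statistics.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)

# Synthetic "nominal" telemetry: 500 samples of 6 correlated channels.
latent = rng.standard_normal((500, 2))
mixing = rng.standard_normal((2, 6))
normal = latent @ mixing + 0.05 * rng.standard_normal((500, 6))

# Fit PCA on nominal data only (unsupervised).
mean, std = normal.mean(axis=0), normal.std(axis=0)
pca = PCA(n_components=2).fit((normal - mean) / std)

def spe(x):
    """Squared prediction error (Q statistic) of samples against the PCA model."""
    z = (x - mean) / std
    reconstructed = pca.inverse_transform(pca.transform(z))
    return np.sum((z - reconstructed) ** 2, axis=1)

threshold = np.percentile(spe(normal), 99)   # simple empirical control limit

# A "faulty" sample: one channel drifts away from its normal correlation structure.
faulty = normal[:5].copy()
faulty[:, 3] += 2.0
print("SPE threshold:", round(threshold, 3))
print("faulty sample SPE:", np.round(spe(faulty), 3), "-> anomaly:", spe(faulty) > threshold)
```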
Linguistic, Pragmatic and Evolutionary Factors in Wason Selection Task
Authors: Olimpia Matarazzo, Fabrizio Ferrara
Abstract:
In two studies we tested the hypothesis that the appropriate linguistic formulation of a deontic rule, i.e. the formulation which clarifies the monadic nature of deontic operators, should produce more correct responses than the conditional formulation in the Wason selection task. We tested this assumption by presenting a prescription rule and a prohibition rule in conditional vs. proper deontic formulation. We contrasted this hypothesis with two other hypotheses derived from social contract theory and relevance theory. According to the first theory, a deontic rule expressed in terms of cost-benefit should elicit a cheater detection module, sensitive to mental state attributions and thus able to discriminate intentional rule violations from accidental rule violations. We tested this prediction by distinguishing the two types of violations. According to relevance theory, performance in the selection task should improve by increasing cognitive effect and decreasing cognitive effort. We tested this prediction by focusing the experimental instructions on the rule vs. the action covered by the rule. In study 1, in which 480 undergraduates participated, we tested these predictions through a 2 x 2 x 2 x 2 (type of rule x rule formulation x type of violation x experimental instructions) between-subjects design. In study 2, carried out by means of a 2 x 2 (rule formulation x type of violation) between-subjects design, we retested the rule formulation hypothesis vs. the cheater-detection hypothesis through a new version of the selection task in which intentional vs. accidental rule violations were better discriminated. 240 undergraduates participated in this study. Results corroborate our hypothesis and challenge the contrasting assumptions. However, they show that the conditional formulation of deontic rules produces lower performance than what is reported in the literature.
Keywords: Deontic reasoning; evolutionary, linguistic, logical, pragmatic factors; Wason selection task.
A Study of RSCMAC Enhanced GPS Dynamic Positioning
Authors: Ching-Tsan Chiang, Sheng-Jie Yang, Jing-Kai Huang
Abstract:
The purpose of this research is to develop and apply the RSCMAC to enhance the dynamic accuracy of the Global Positioning System (GPS). GPS devices provide accurate positioning, speed detection and a highly precise time standard over more than 98% of the earth's surface. The overall operation of the Global Positioning System involves 24 GPS satellites in space; signal transmission that includes 2 frequency carrier waves (Link 1 and Link 2) and 2 sets of random telegraphic codes (C/A code and P code); and on-earth monitoring stations or client GPS receivers. With only 4 satellites, the client position and its elevation can be determined rapidly; the more satellites that are receivable, the more accurately the position can be decoded. Currently, the standard positioning accuracy of simplified GPS receivers has greatly increased, but due to the errors of the satellite clocks, tropospheric delay and ionospheric delay, the current measurement accuracy is at the level of 5~15 m. To increase dynamic GPS positioning accuracy, most researchers mainly use an inertial navigation system (INS) and the installation of other sensors or maps for assistance. This research utilizes the RSCMAC advantages of fast learning, guaranteed learning convergence and the capability of solving time-related dynamic system problems, combined with a static positioning calibration structure, to improve and increase the GPS dynamic accuracy. The increase in GPS dynamic positioning accuracy is achieved by using the RSCMAC system with GPS receivers to collect dynamic error data for error prediction, and then using the predicted error to correct the GPS dynamic positioning data. The ultimate purpose of this research is to improve the dynamic positioning error of cheap GPS receivers, so that the economic benefits are enhanced while the accuracy is increased.
Keywords: Dynamic Error, GPS, Prediction, RSCMAC.
Progressive AAM Based Robust Face Alignment
Authors: Daehwan Kim, Jaemin Kim, Seongwon Cho, Yongsuk Jang, Sun-Tae Chung, Boo-Gyoun Kim
Abstract:
AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. If the initial values are far from the global optimum values, there is a fairly high possibility that AAM-based face alignment will converge to a local minimum. In this paper, we propose a progressive AAM-based face alignment algorithm which first finds the feature parameter vector fitting the inner facial feature points of the face and later localizes the feature points of the whole face using this first result. The proposed progressive AAM-based face alignment algorithm utilizes the fact that the feature points of the inner part of the face are less variant and less affected by the background surrounding the face than those of the outer part (like the chin contour). The proposed algorithm consists of two stages: a modeling and relation derivation stage and a fitting stage. The modeling and relation derivation stage first constructs two AAM models, the inner face AAM model and the whole face AAM model, and then derives the relation matrix between the inner face AAM parameter vector and the whole face AAM model parameter vector. In the fitting stage, the proposed algorithm aligns the face progressively through two phases. In the first phase, the algorithm finds the feature parameter vector fitting the inner facial AAM model to a new input face image; in the second phase, it localizes the whole facial feature points of the new input face image based on the whole face AAM model, using the initial parameter vector estimated from the inner feature parameter vector obtained in the first phase and the relation matrix obtained in the first stage. Through experiments, it is verified that the proposed progressive AAM-based face alignment algorithm is more robust with respect to pose, illumination, and face background than the conventional basic AAM-based face alignment algorithm.
Keywords: Face Alignment, AAM, facial feature detection, model matching.
Data Privacy and Safety with Large Language Models
Authors: Ashly Joseph, Jithu Paulose
Abstract:
Large language models (LLMs) have revolutionized natural language processing capabilities, enabling applications such as chatbots, dialogue agents, and image and video generators. Nevertheless, their training on extensive datasets comprising personal information poses notable privacy and safety hazards. This study examines methods for addressing these challenges, specifically focusing on approaches to enhance the security of LLM outputs, safeguard user privacy, and adhere to data protection rules. We explore several methods, including post-processing detection algorithms, content filtering, and reinforcement learning from human and AI inputs, as well as the difficulties in maintaining a balance between model safety and performance. The study also emphasizes the dangers of unintentional data leakage, privacy issues related to user prompts, and the possibility of data breaches. We highlight the significance of corporate data governance rules and optimal methods for engaging with chatbots. In addition, we analyze the development of data protection frameworks, evaluate the adherence of LLMs to the General Data Protection Regulation (GDPR), and examine privacy legislation in academic and business policies. We demonstrate the difficulties and remedies involved in preserving data privacy and security in the age of sophisticated artificial intelligence by employing case studies and real-life instances. This article seeks to educate stakeholders on practical strategies for improving the security and privacy of LLMs, while also ensuring their responsible and ethical implementation.
Keywords: Data privacy, large language models, artificial intelligence, machine learning, cybersecurity, general data protection regulation, data safety.
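One of the post-processing content filters mentioned above can be sketched very simply as regex-based redaction of obvious personal identifiers in model output before it reaches the user. The patterns below are illustrative only and would miss many real-world PII formats.

```python
import re

# Illustrative patterns for a few common identifier formats (not exhaustive);
# more specific patterns are applied before the broader phone-number pattern.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace detected identifiers with typed placeholders before returning LLM output."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

llm_output = "Contact John at john.doe@example.com or 555-123-4567, SSN 123-45-6789."
print(redact(llm_output))
```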
Lamb Wave Wireless Communication in Healthy Plates Using Coherent Demodulation
Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad
Abstract:
Guided ultrasonic waves are used in Non-Destructive Testing and Structural Health Monitoring for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in some industrial applications such as nuclear, aerospace and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves such as Lamb waves as an information carrier, due to their capability of propagating over long distances. In addition, valuable information about the health of the structure could be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. After this, the coherent demodulation algorithm used in telecommunications is tested for Amplitude Shift Keying, On-Off Keying and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as the threshold choice, number of cycles per bit and bit rate are optimized. Experimental results are compared based on the average bit error percentage. Results have shown high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a bit rate decrease. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
Keywords: Lamb Wave Communication, wireless communication, coherent demodulation, bit error percentage.
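A minimal baseband sketch of coherent BPSK demodulation as referenced above: bits flip the phase of a carrier, and the receiver multiplies by a synchronized reference carrier and integrates over each bit period. The carrier frequency, cycles per bit and noise level are arbitrary assumptions, not the paper's Lamb-wave settings.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1_000_000          # sample rate (Hz), assumed
fc = 50_000             # carrier frequency (Hz), assumed
cycles_per_bit = 10
samples_per_bit = int(fs / fc * cycles_per_bit)

bits = rng.integers(0, 2, 64)

# BPSK modulation: bit 1 -> phase 0, bit 0 -> phase pi.
t_bit = np.arange(samples_per_bit) / fs
carrier = np.sin(2 * np.pi * fc * t_bit)
tx = np.concatenate([(1 if b else -1) * carrier for b in bits])

rx = tx + 0.8 * rng.standard_normal(tx.size)   # additive channel noise

# Coherent demodulation: multiply by the synchronized reference carrier and
# integrate (sum) over each bit period, then threshold at zero.
reference = np.tile(carrier, bits.size)
correlated = (rx * reference).reshape(bits.size, samples_per_bit).sum(axis=1)
decoded = (correlated > 0).astype(int)

ber = np.mean(decoded != bits)
print(f"bit error percentage: {100 * ber:.2f}%")
```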
Multipath Routing Protocol Using Basic Reconstruction Routing (BRR) Algorithm in Wireless Sensor Network
Authors: K. Rajasekaran, Kannan Balasubramanian
Abstract:
A sensor network consists of multiple detection locations called sensor nodes, each of which is tiny, featherweight and portable. Single-path routing protocols in wireless sensor networks can lead to holes in the network, since only the nodes present in the single path are used for data transmission. Apart from advantages like reduced computation, complexity and resource utilization, they have drawbacks such as reduced throughput, increased traffic load and delay in data delivery. Therefore, multipath routing protocols are preferred for WSNs. Distributing the traffic among multiple paths increases the network lifetime. We propose a scheme in which the data are transmitted through a dominant path to save energy. In order to obtain a high delivery ratio, a basic route reconstruction protocol is utilized to reconstruct the path whenever a failure is detected. A basic reconstruction routing (BRR) algorithm is proposed, in which a node can leap over a path failure by using the already existing routing information from its neighbourhood while the collected data are transmitted from the source to the sink. In order to save energy and attain a high data delivery ratio, data are transmitted along multiple paths, which is achieved by the BRR algorithm whenever a failure is detected. Further, an analysis of how the proposed protocol overcomes the drawbacks of the existing protocols is presented. The performance of our protocol is compared to AOMDV and the energy efficient node-disjoint multipath routing protocol (EENDMRP). The system is implemented using NS-2.34. The simulation results show that the proposed protocol has a high delivery ratio with low energy consumption.
Keywords: Multipath routing, WSN, energy efficient routing, alternate route, assured data delivery.
62 Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks
Authors: B. Golchin, N. Riahi
Abstract:
One of the issues that has attracted significant attention in recent years is recognizing the sentiments and emotions in social media texts. The analysis of sentiments and emotions is intended to recognize conceptual information such as the opinions, feelings, attitudes and emotions of people towards products, services, organizations, people, topics, events and features in written text, which indicates the breadth of the problem space. In the real world, businesses and organizations are always looking for tools to gather the ideas, emotions and opinions of people about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. Using this social network, users can share their information and opinions about personal issues, policies, products, events, etc., and the availability of its data makes it suitable for classifying emotional states. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. The use of deep learning methods to increase the learning capacity of the model is an advantage given the large amount of available data. Tweets collected on various topics are classified into four classes using a combination of two bidirectional Long Short-Term Memory networks and a convolutional network. The results obtained in this study, with an average accuracy of 93%, show that the proposed framework performs well and improves accuracy compared to previous work.
Keywords: Emotion classification, sentiment analysis, social networks, deep neural networks.
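A minimal Keras-style sketch of the network family described above, two stacked bidirectional LSTM layers followed by a convolutional layer and a softmax over four emotion classes, is given below; the vocabulary size, embedding dimension, sequence length and layer widths are illustrative placeholders rather than the authors' settings.

```python
from tensorflow.keras import layers, models

# Placeholder hyperparameters; the paper's exact settings are not reproduced here.
VOCAB_SIZE, EMBED_DIM, MAX_LEN, NUM_CLASSES = 20000, 128, 60, 4

def build_emotion_classifier():
    """Two stacked bidirectional LSTMs, a 1-D convolution, global pooling,
    and a softmax over four emotion classes."""
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = layers.Conv1D(filters=128, kernel_size=3, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```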
61 Optimization and Validation for Determination of VOCs from Lime Fruit Citrus aurantifolia (Christm.) with and without California Red Scale Aonidiella aurantii (Maskell) Infested by Using HS-SPME-GC-FID/MS
Authors: K. Mohammed, M. Agarwal, J. Mewman, Y. Ren
Abstract:
An optimal technique has been developed for extracting the volatile organic compounds which contribute to the aroma of lime fruit (Citrus aurantifolia). The volatile organic compounds of healthy lime fruit and of fruit infested with California red scale Aonidiella aurantii were characterized using headspace solid phase microextraction (HS-SPME) combined with gas chromatography (GC) coupled with flame ionization detection (FID) and gas chromatography-mass spectrometry (GC-MS), a very simple, efficient and non-destructive extraction method. A three-phase 50/30 μm DVB/CAR/PDMS fibre was used for the extraction process. The optimal sealing and fibre exposure times for volatiles from whole lime fruit to reach equilibrium in the headspace of the chamber were 16 and 4 hours, respectively, and a desorption time of 5 min was selected for the three-phase fibre. Herbivore activity induces indirect plant defences, such as the emission of herbivore-induced plant volatiles (HIPVs), which can be used by natural enemies for host location. GC-MS analysis showed qualitative differences among the volatiles emitted by infested and healthy lime fruit. The GC-MS analysis allowed the initial identification of 18 compounds with similarities higher than 85% against the NIST mass spectral library. One of these, D-limonene, was increased by A. aurantii infestation, and three, undecane, α-farnesene and 7-epi-α-selinene, were decreased. From an applied point of view, the application of the above-mentioned VOCs may help boost the efficiency of biocontrol programs and natural enemies’ production techniques.
Keywords: Lime fruit, Citrus aurantifolia, California red scale, Aonidiella aurantii, VOCs, HS-SPME/GC-FID-MS.
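As a small illustration of the downstream data handling implied above (filtering library identifications by the 85% similarity threshold and comparing compound abundance between healthy and infested fruit), the following pandas sketch assumes a hypothetical peak table; the column names and structure are assumptions for illustration, not the authors' workflow.

```python
import pandas as pd

def flag_infestation_markers(peaks: pd.DataFrame, min_similarity: float = 85.0) -> pd.DataFrame:
    """Filter GC-MS identifications by NIST match similarity and compare mean
    peak areas between healthy and infested fruit.
    `peaks` is a hypothetical table with columns:
    compound, condition ('healthy' | 'infested'), similarity (%), peak_area.
    """
    reliable = peaks[peaks["similarity"] >= min_similarity]
    table = reliable.pivot_table(index="compound", columns="condition",
                                 values="peak_area", aggfunc="mean")
    # ratio > 1 means the volatile is emitted more strongly by infested fruit
    table["infested_to_healthy_ratio"] = table["infested"] / table["healthy"]
    return table.sort_values("infested_to_healthy_ratio", ascending=False)
```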
60 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation serves as a motivating starting point. In this work, we extend NF neural networks to the setting where an external predictor x is present. Specifically, the NF is used to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variation in y that is not, or is only weakly, related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework, while significantly improving the interpretability of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variation, due to factors such as lighting condition and subject identity, from the other random variation. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z such as a Gaussian mixture, fails to generate interpretable results.
Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.
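To make the latent structure concrete, the following Python sketch shows how sampling from y given x could proceed once a model of this kind has been trained: the supervised component zP comes from a simple predictive model for x, the component zN is drawn as an independent Gaussian, and the concatenated latent is pushed through the inverse flow. The names predictive_model and flow_inverse are placeholders for trained components, not part of the authors' code.

```python
import numpy as np

def sample_ap_cde(x, predictive_model, flow_inverse, dim_zN, n_samples=1, rng=None):
    """Conceptual sampling step for an augmented-posterior CDE model.
    x:                predictor for which samples of y | x are wanted
    predictive_model: simple supervised model (e.g. logistic/linear regression)
                      whose output supplies the low-dimensional zP component
    flow_inverse:     trained invertible map z = [zP, zN] -> y (the NF's inverse pass)
    dim_zN:           dimension of the x-independent Gaussian component zN
    """
    rng = rng or np.random.default_rng()
    zP = np.atleast_1d(predictive_model(x))       # supervised, low-dimensional summary of x
    samples = []
    for _ in range(n_samples):
        zN = rng.standard_normal(dim_zN)          # x-independent variation (lighting, noise, ...)
        z = np.concatenate([zP, zN])              # augmented latent [zP, zN]
        samples.append(flow_inverse(z))           # push through the flow to get one draw of y
    return np.stack(samples)
```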