Search results for: Jean Pascal Cambronne

29 Logistic Model Tree and Expectation-Maximization for Pollen Recognition and Grouping

Authors: Endrick Barnacin, Jean-Luc Henry, Jack Molinié, Jimmy Nagau, Hélène Delatte, Gérard Lebreton

Abstract:

Palynology is a field of interest for many disciplines. It has multiple applications such as chronological dating, climatology, allergy treatment, and even honey characterization. Unfortunately, the analysis of a pollen slide is a complicated and time-consuming task that requires the intervention of experts in the field, who are becoming increasingly rare due to economic and social conditions. The automation of this task is therefore a necessity. Pollen slide analysis is mainly a visual process, as it is carried out with the naked eye, which is why a primary route to automating palynology is digital image processing. This method has the lowest cost and relatively good accuracy in pollen retrieval. In this work, we propose a system combining recognition and grouping of pollen. It uses a Logistic Model Tree to classify pollen species already known to the system while detecting any unknown species; the unknown pollen are then grouped using a cluster-based approach. Satisfactory recognition rates were achieved for known species, and the automated clustering appears to be a promising approach.
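
A rough illustration of this recognize-then-group pipeline is sketched below with scikit-learn. Since scikit-learn provides no Logistic Model Tree, a logistic regression stands in for the LMT; low-confidence samples are flagged as unknown via a probability threshold, and the grouping uses EM through a Gaussian mixture. Feature vectors are assumed to be pre-extracted LBP histograms.

```python
# Recognize-then-group sketch: a logistic regression stands in for the
# Logistic Model Tree (absent from scikit-learn); low-confidence samples
# are treated as unknown species and grouped by EM (Gaussian mixture).
# X_* are assumed to be pre-extracted LBP feature vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

def recognize_and_group(X_train, y_train, X_new, reject_threshold=0.6, n_groups=3):
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    proba = clf.predict_proba(X_new)
    known_mask = proba.max(axis=1) >= reject_threshold
    known_labels = clf.predict(X_new[known_mask])        # recognized species
    unknown = X_new[~known_mask]
    groups = None
    if len(unknown) >= n_groups:                          # EM grouping of unknowns
        gmm = GaussianMixture(n_components=n_groups, random_state=0).fit(unknown)
        groups = gmm.predict(unknown)
    return known_mask, known_labels, groups
```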

Keywords: Pollen recognition, logistic model tree, expectation-maximization, local binary pattern.

28 New Approach for Minimizing Wavelength Fragmentation in Wavelength-Routed WDM Networks

Authors: Sami Baraketi, Jean-Marie Garcia, Olivier Brun

Abstract:

Wavelength Division Multiplexing (WDM) is the dominant transport technology used in numerous high-capacity backbone networks based on optical infrastructures. Given the importance of the costs (CapEx and OpEx) associated with these networks, resource management is becoming increasingly important, especially how the optical circuits, called "lightpaths", are routed throughout the network. This requires efficient algorithms which provide routing strategies at the lowest cost. We focus on the lightpath routing and wavelength assignment problem, known as the RWA problem, while optimizing wavelength fragmentation over the network. Wavelength fragmentation poses a serious challenge for network operators since it leads to misuse of the wavelength spectrum and ultimately to the refusal of new lightpath requests. In this paper, we first establish a new Integer Linear Program (ILP) for the problem based on a node-link formulation. This formulation follows a multilayer approach in which the original network is decomposed into several network layers, each corresponding to a wavelength. Furthermore, we propose an efficient heuristic for the problem based on a greedy algorithm followed by a post-treatment procedure. The obtained results show that the optimal solution is often reached. We also compare our results with those of other RWA heuristic methods.
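
The authors' ILP is not reproduced here, but the multilayer view can be illustrated with a minimal greedy first-fit sketch: one copy of the topology per wavelength, with each lightpath routed on the first layer that still has a free path (networkx assumed).

```python
# Greedy first-fit RWA sketch over the multilayer view: one topology
# copy per wavelength, route each request on the first layer with a
# free path (networkx assumed; not the authors' ILP).
import networkx as nx

def first_fit_rwa(G, demands, n_wavelengths):
    used = [set() for _ in range(n_wavelengths)]    # occupied links per layer
    assignment = {}
    for (src, dst) in demands:
        for w in range(n_wavelengths):
            H = G.copy()
            H.remove_edges_from(used[w])            # keep only free links
            try:
                path = nx.shortest_path(H, src, dst)
            except nx.NetworkXNoPath:
                continue
            used[w].update(zip(path, path[1:]))     # occupy the path's links
            assignment[(src, dst)] = (w, path)
            break
        else:
            assignment[(src, dst)] = None            # request blocked
    return assignment
```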

Keywords: WDM, lightpath, RWA, wavelength fragmentation, optimization, linear programming, heuristic.

27 Effect of Ply Orientation on Roughness for the Trimming Process of CFRP Laminates

Authors: Jean François Chatelain, Imed Zaghbani, Joseph Monier

Abstract:

The machining of Carbon Fiber Reinforced Plastics has come to constitute a significant challenge for many fields of industry. The resulting surface finish of machined parts is of primary concern for several reasons, including contact quality and impact on the assembly. Therefore, the characterization and prediction of roughness based on machining parameters are crucial for cost-effective operations. In this study, a PCD tool with two straight flutes was used to trim 32-ply carbon fiber laminates in order to analyze the effects of the feed rate and the cutting speed on the surface roughness. The results show that while the cutting speed has only a slight impact on the surface finish, the feed rate affects it strongly. A detailed study was also conducted on the effect of fiber orientation on surface roughness for the quasi-isotropic laminates used in aerospace. The roughness profiles for the four ply orientations of the lay-up were compared, and fiber angle was found to be a critical parameter for surface roughness. One of the four orientations studied led to very poor surface finishes, and characteristic roughness profiles were identified that relate only to the ply orientations of multilayer carbon fiber laminates.

Keywords: Roughness, Detouring, Composites, Aerospace.

26 Development of Tools for Multi-Vehicle Simulation with Robot Operating System and ArduPilot

Authors: Pierre Kancir, Jean-Philippe Diguet, Marc Sevaux

Abstract:

One of the main difficulties in developing multi-robot systems (MRS) lies in the simulation and testing tools available. Indeed, if the differences between simulations and real robots are too significant, the transition from simulation to robot is impossible without another long development phase and does not allow the simulation itself to be validated. Moreover, testing different algorithmic solutions or robot modifications requires strong knowledge of the current tools and significant development time. The availability of tools for MRS, particularly with flying drones, is therefore crucial to enable the industrial emergence of these systems. This research presents the most commonly used tools for MRS simulation together with their main shortcomings, and introduces complementary tools that improve designer productivity in the development of multi-vehicle solutions, with a focus on a fast learning curve and a rapid transition from simulation to real usage. The proposed contributions build on existing open-source tools, namely the Gazebo simulator combined with ROS (Robot Operating System) and the open-source multi-platform autopilot ArduPilot, to bring them to a broad audience.
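
A minimal sketch of how several ArduPilot SITL instances might be spawned for such an experiment is shown below. sim_vehicle.py and its -I instance flag exist in the ArduPilot tree, but exact flags and the port layout vary by version, so treat this as illustrative rather than as the authors' tooling.

```python
# Illustrative launcher for several ArduPilot SITL instances (flags and
# port layout vary across ArduPilot versions; -I offsets the per-instance
# network ports, and MAVProxy is skipped here).
import subprocess

def spawn_fleet(n_vehicles, vehicle="ArduCopter"):
    procs = []
    for i in range(n_vehicles):
        cmd = ["sim_vehicle.py", "-v", vehicle, "-I", str(i), "--no-mavproxy"]
        procs.append(subprocess.Popen(cmd))
    return procs   # instance i typically listens on TCP 5760 + 10 * i

if __name__ == "__main__":
    fleet = spawn_fleet(3)   # three copters for a Gazebo/ROS experiment
```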

Keywords: ROS, ArduPilot, MRS, simulation, drones, Gazebo.

25 In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR

Authors: Pascal Mwenge, Tumisang Seodigeng

Abstract:

The combination of world population growth and the third industrial revolution has led to a high demand for fuels. On the other hand, the decrease of global fossil fuel deposits and the air pollution caused by these fuels have compounded the challenges the world faces in meeting its energy needs. Therefore, new forms of environmentally friendly and renewable fuels such as biodiesel are needed. The primary analytical techniques for monitoring methanolysis yield have been chromatography and spectroscopy; these methods have proven reliable but are demanding, costly, and do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel production from sunflower oil using FTIR (Fourier Transform Infrared) spectroscopy was studied; the experiments were performed in an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. The quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module of the iC IR 7.0 software. Fifteen samples of known concentrations, taken in duplicate, were used for model calibration and cross-validation; the data were pre-processed using mean centering and variance scaling, a square-root spectrum transform, and solvent subtraction. These pre-processing methods improved the performance indexes from 7.98 to 0.0096 (RMSEC), 11.2 to 3.41 (RMSECV), 6.32 to 2.72 (RMSEP), and 0.9416 to 0.9999 (cumulative R²). The R² values of 1 (training), 0.9918 (test), and 0.9946 (cross-validation) indicated the goodness of fit of the model. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the two agreed closely at concentrations above 18%. The software eliminated the complexity of Partial Least Squares (PLS) chemometrics. It was concluded that the model obtained can be used to monitor the methanolysis of sunflower oil at both industrial and laboratory scale.
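
A minimal sketch of such a PLS calibration with cross-validation is given below, assuming FTIR spectra in X and ester concentrations in y. The iC Quant preprocessing chain is reduced to standard scaling for brevity, so this shows the underlying chemometric idea, not the software's workflow.

```python
# PLS calibration with cross-validation, assuming spectra X
# (n_samples x n_wavenumbers) and concentrations y. Preprocessing is
# reduced to standard scaling for brevity.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.preprocessing import StandardScaler

def calibrate(X, y, n_components=5):
    Xs = StandardScaler().fit_transform(X)
    pls = PLSRegression(n_components=n_components).fit(Xs, y)
    rmsec = np.sqrt(np.mean((pls.predict(Xs).ravel() - y) ** 2))   # calibration error
    y_cv = cross_val_predict(pls, Xs, y, cv=5).ravel()
    rmsecv = np.sqrt(np.mean((y_cv - y) ** 2))                     # cross-validation error
    return pls, rmsec, rmsecv
```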

Keywords: Biodiesel, calibration, chemometrics, FTIR, methanolysis, multivariate analysis, transesterification.

24 Genetic Algorithm Optimization of the Economical, Ecological and Self-Consumption Impact of the Energy Production of a Single Building

Authors: Ludovic Favre, Thibaut M. Schafer, Jean-Luc Robyr, Elena-Lavinia Niederhäuser

Abstract:

This paper presents an optimization method based on a genetic algorithm for energy management inside buildings, developed in the frame of the Smart Living Lab (SLL) project in Fribourg (Switzerland). The algorithm optimizes the interaction between renewable energy production, storage systems, and energy consumers. In comparison with standard algorithms, the innovative aspect of this project is the extension of smart regulation over three simultaneous criteria: energy self-consumption, the reduction of greenhouse gas emissions, and operating costs. The genetic algorithm approach was chosen due to the large number of optimization variables and the non-linearity of the objective function. The optimization process also includes real-time data from the building as well as weather forecasts and user habits. This information is used by a physical model of the building's energy resources to predict future energy production and needs, to select the best energy strategy, and to combine production or storage of energy in order to guarantee the demand for electrical and thermal energy. The principle of operation of the algorithm, as well as a typical output example, is presented.
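
A plain-Python sketch of a genetic algorithm over the three criteria is given below. The building model is reduced to a hypothetical simulate() stub returning (self-consumption, CO₂, cost) for a candidate 24-hour schedule; in the real system this role is played by the physical model fed with forecasts and live measurements, and the weights are illustrative.

```python
# Minimal GA sketch for the three-criteria regulation: maximize
# self-consumption, minimize emissions and cost, via a weighted fitness.
# simulate(schedule) -> (self_consumption, co2_kg, cost) is a stub.
import random

def fitness(schedule, simulate, w=(0.4, 0.3, 0.3)):
    self_cons, co2, cost = simulate(schedule)
    return w[0] * self_cons - w[1] * co2 - w[2] * cost

def evolve(simulate, horizon=24, pop_size=40, generations=100):
    pop = [[random.uniform(0, 1) for _ in range(horizon)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, simulate), reverse=True)
        parents = pop[: pop_size // 2]                  # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, horizon)          # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(horizon)               # point mutation
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return pop[0]                                       # best schedule found
```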

Keywords: Building’s energy, control system, energy management, modelling, genetic optimization algorithm, renewable energy, greenhouse gases, energy storage.

23 Effect of Support Distance on Damage of Drilled Thin CFRP Laminates

Authors: Jean François Chatelain, Imed Zaghbani, Gilbert Lebrun, Kaml Hasni

Abstract:

Severe damage may occur during the drilling of carbon fiber reinforced plastics (CFRP). In practice, this damage is limited by adding a backup support to the drilled parts. For some aeronautical parts with curvatures, backing up parts is a demanding process. In order to simplify the operation, this research studies the effect of using a configurable setup to support parts on the resulting quality of drilled holes. The test coupons referenced in this study are twenty-four-ply unidirectional laminates made of carbon fibers and epoxy resin. Several signals were measured during the drilling process for these laminates, including the thrust force, the displacement, and the acceleration. The processing of these signals demonstrated that the damage is due to the combination of two main factors: the spring-back of the thin part and the thrust force. These results were confirmed for different feeds and speeds. When the distance between supports is increased, the spring-back increases but the thrust force decreases. The study proves the feasibility of unsupported drilling of thin CFRP laminates without creating any observable damage.

Keywords: CFRP, Damage, Drilling, Flexible setup.

22 Performance Analysis in 5th Generation Massive Multiple-Input-Multiple-Output Systems

Authors: Jihad S. Daba, Jean-Pierre Dubois, Georges El Soury

Abstract:

Fifth generation wireless networks promise significant capacity enhancement to serve more clients and services at higher information rates with better reliability while consuming less power. The deployment of massive multiple-input multiple-output (MIMO) technology enables broadband wireless networks that use base station antenna arrays to serve a large number of users on the same frequency and time-slot channels. In this work, we evaluate the performance of massive MIMO systems in fifth generation cellular networks in terms of capacity and bit error rate. Several cases were considered and analyzed to compare the performance of massive MIMO systems while varying the number of antennas at both the transmitting and receiving ends. We found that, unlike in classical MIMO systems, reducing the number of transmit antennas while increasing the number of antennas at the receiver end provides a better route to performance enhancement. In addition, enhanced orthogonal frequency division multiplexing and beam division multiple access schemes further improve the performance of massive MIMO systems and make them more reliable.
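
The capacity trend discussed here can be checked with a small Monte Carlo estimate of the textbook ergodic capacity C = log₂ det(I + (SNR/Nt) HHᴴ) over i.i.d. Rayleigh channels; this is a standard formula, not the paper's exact simulation setup.

```python
# Monte Carlo ergodic capacity of an Nt x Nr i.i.d. Rayleigh MIMO link,
# C = log2 det(I + (SNR/Nt) H H^H). Comparing (many Tx, few Rx) with
# (few Tx, many Rx) reproduces the trend described above.
import numpy as np

def ergodic_capacity(nt, nr, snr_db=10.0, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        total += np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T)).real
    return total / trials

print(ergodic_capacity(64, 4), ergodic_capacity(4, 64))   # bits/s/Hz
```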

Keywords: Beam division multiple access, D2D communication, enhanced OFDM, fifth generation broadband, massive MIMO.

21 Idealization of Licca-chan and Barbie: Comparison of Two Dolls across the Pacific

Authors: Miho Tsukamoto

Abstract:

Since its creation in 1959, the Barbie doll has been a symbol of US society. Likewise, Licca-chan, a Japanese doll created in 1967, became a symbolic doll of Japanese society. Prior to the introduction of Licca-chan, Barbie was already marketed in Japan, but its sales were dismal. Licca-chan (full name: Kayama Licca) is a plastic doll available in sizes ranging from 21.0 cm to 29.0 cm which many Japanese girls dream of owning. For over 35 years, the manufacturer, Takara Co., Ltd., has sold over 48 million dolls and has produced doll houses, accessories, clothes, and Licca-chan video games for the Nintendo DS. Many first-generation Licca-chan consumers are still enamored with Licca-chan and visit Licca-chan House, in an amusement park, with their daughters. These people are called Licca-chan maniacs, as they enjoy touring the Licca-chan factory in Tohoku or purchasing various Licca-chan accessories. After the successful launch of Licca-chan into the Japanese market, a doll blending US and Japanese features, JeNny, was later sold in the same market by Takara Co., Ltd. in 1982. This paper compares and analyzes these iconic cultural dolls, Barbie and Licca-chan, both of which embody concepts of girls' dreams. Through Jean Baudrillard's concept of mythology, the dolls can be read as idealized images of figures offered to consumers in products, while at the same time consumers can view the products from different perspectives, which can cause controversy.

Keywords: Barbie, Dolls, JeNny, Idealization, Licca-chan.

20 Optimization of Kinematics for Birds and UAVs Using Evolutionary Algorithms

Authors: Mohamed Hamdaoui, Jean-Baptiste Mouret, Stephane Doncieux, Pierre Sagaut

Abstract:

The aim of this work is to present a multi-objective optimization method to find maximum-efficiency kinematics for a flapping-wing unmanned aerial vehicle. We restricted our study to rectangular wings with the same profile along the span and to harmonic dihedral motion. It is assumed that the bird-like aerial vehicle (whose span and surface area were fixed to 1 m and 0.15 m², respectively) is in horizontal, mechanically balanced motion at fixed speed. We used two flight physics models to describe the vehicle's aerodynamic performance, namely DeLaurier's model, which has been used in many studies dealing with flapping wings, and the model proposed by Dae-Kwan et al. Then, a constrained multi-objective optimization of the propulsive efficiency is performed using a recent evolutionary multi-objective algorithm called ε-MOEA. Firstly, we show that feasible solutions (i.e. solutions that fulfil the imposed constraints) can be obtained using the model of Dae-Kwan et al. Secondly, we highlight that a single-objective optimization approach (the weighted sum method, for example) can also give optimal solutions as good as the multi-objective one, which nevertheless offers the advantage of directly generating the set of the best trade-offs. Finally, we show that DeLaurier's model does not yield feasible solutions.
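
The contrast between the two approaches can be sketched with a non-dominance (Pareto) filter, of the kind an ε-MOEA maintains internally, next to a weighted-sum scalarization of the same objectives; the objective pairs are assumed to come from a hypothetical evaluate(kinematics) routine.

```python
# Non-dominated (Pareto) filter vs. weighted-sum scalarization over the
# same objective pairs (efficiency to maximize, power to minimize), as
# returned by a hypothetical evaluate(kinematics) routine.
import numpy as np

def pareto_front(points):
    # points: (n, 2) array of (efficiency, power)
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] >= p[0] and q[1] <= p[1] and (q[0] > p[0] or q[1] < p[1])
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return points[keep]                        # the set of best trade-offs

def weighted_best(points, w=0.5):
    score = w * points[:, 0] - (1.0 - w) * points[:, 1]
    return points[np.argmax(score)]            # a single trade-off only
```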

Keywords: Flight physics, evolutionary algorithm, optimization, Pareto surface.

19 FPGA Implementation of Generalized Maximal Ratio Combining Receiver Diversity

Authors: Rafic Ayoubi, Jean-Pierre Dubois, Rania Minkara

Abstract:

In this paper, we study the FPGA implementation of a novel supra-optimal receiver diversity combining technique, generalized maximal ratio combining (GMRC), for wireless transmission over fading channels in SIMO systems. Previously published results using an ML-detected GMRC diversity signal driven by BPSK showed superior bit-error-rate performance over the widely used MRC combining scheme in an imperfect channel estimation (ICE) environment; under perfect channel estimation conditions, the performance of GMRC and MRC was identical. The main drawback of the GMRC study was that it was purely theoretical, so a successful FPGA implementation using pipelining techniques is needed as a wireless communication test bed for practical, real-life situations. Simulation results showed that the hardware implementation was efficient in terms of both speed and area. Since diversity combining is especially effective in small femto- and picocells, internet-associated wireless peripheral systems stand to benefit most from GMRC. As a result, many spin-off applications can be made to the hardware of IP-based fourth generation networks.
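
For reference, a minimal numpy sketch of the classical MRC baseline that GMRC is compared against is given below; GMRC's generalized weighting is in the cited work and is not reproduced here.

```python
# Classical maximal ratio combining for a 1 x L SIMO link: weight each
# branch by the conjugate channel estimate (matched filter) and normalize.
import numpy as np

def mrc_combine(r, h):
    """r: (L,) received samples, h: (L,) channel estimates."""
    return np.vdot(h, r) / np.sum(np.abs(h) ** 2)

rng = np.random.default_rng(1)
L, snr = 4, 10 ** (6 / 10)
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
s = 1.0 + 0j                                   # BPSK symbol
n = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * snr)
print(mrc_combine(h * s + n, h))               # close to the sent symbol
```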

Keywords: Femto-internet cells, field-programmable gate array, generalized maximal-ratio combining, Lyapunov fractal dimension, pipelining technique, wireless SIMO channels.

18 An Enhanced SAR-Based Tsunami Detection System

Authors: Jean-Pierre Dubois, Jihad S. Daba, H. Karam, J. Abdallah

Abstract:

Tsunami early detection and warning systems have proved to be of the utmost importance, especially after the destructive tsunami that hit Japan in March 2011. Such systems are crucial to inform the authorities of any tsunami risk and of its degree of danger, so that the right decisions can be made and the public notified of the actions they need to take to save their lives. The purpose of this research is to enhance existing tsunami detection and warning systems. We first propose an automated and miniaturized model of an early tsunami detection and warning system. The operation of the warning system is simulated using the data acquisition toolbox of Matlab, with measurements acquired from specified internet pages owing to the lack of the required real-life seismic and hydrologic sensors, and a graphical user interface is built for the system. In the second phase of this work, we implement various satellite image filtering schemes to enhance the acquired synthetic aperture radar images of the tsunami-affected region, which are masked by speckle noise. This enables us to conduct a post-tsunami damage extent study and calculate the percentage of damage. We conclude by proposing improvements to the telecommunication infrastructure of existing tsunami warning systems through a migration to IP-based networks and fiber optic links.
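
A minimal despeckling sketch for the filtering stage is shown below, using scipy's adaptive Wiener filter on a synthetically speckled image; real SAR data and the other filters evaluated in the paper are not reproduced.

```python
# Despeckling sketch: apply scipy's adaptive Wiener filter to an image
# corrupted by multiplicative (exponential) speckle and compare errors.
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
clean = np.ones((128, 128))
clean[40:90, 40:90] = 4.0                              # a bright "damaged" patch
speckled = clean * rng.exponential(1.0, clean.shape)   # multiplicative speckle
filtered = wiener(speckled, mysize=5)
print(float(np.abs(filtered - clean).mean()),          # error after filtering
      float(np.abs(speckled - clean).mean()))          # error before filtering
```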

Keywords: Detection, GIS, GSN, GTS, GPS, speckle noise, synthetic aperture radar, tsunami, wiener filter.

17 An Ontological Approach to Existentialist Theatre and Theatre of the Absurd in the Works of Jean-Paul Sartre and Samuel Beckett

Authors: Gülten Silindir Keretli

Abstract:

The aim of this study is to analyse the works of two playwrights within the framework of existential philosophy, observing ontological existence in the plays No Exit and Endgame. The literary works are discussed separately in each section of this study. The despair of the post-war generation of Europe problematized the 'human condition' in every field of literature, itself the product of social upheaval. With this concern in mind, Sartre's creative works portrayed man as a lonely being, burdened with a terrifying freedom to choose and to create his own meaning in an apparently meaningless world. Traces of existential thought are to be found throughout the history of philosophy and literature. The theatre of the absurd, on the other hand, is a form of drama showing the absurdity of the human condition, and it is heavily influenced by existential philosophy. Beckett is the most influential playwright of the theatre of the absurd, and the themes and thoughts in his plays share many tenets of existential philosophy, which posits the meaninglessness of existence and regards man as thrown into the universe and into desolate isolation. To overcome loneliness and isolation, the human ego needs recognition from other people; Sartre calls this need for recognition the need for 'the Look' (le regard) from the Other. In this paper, existentialist philosophy and existentialist angst are elaborated, and works of existentialist theatre and the theatre of the absurd are then discussed within the framework of existential philosophy.

Keywords: Consciousness, existentialism, the notion of absurd, the other.

16 Material Density Mapping on Deformable 3D Models of Human Organs

Authors: Petru Manescu, Joseph Azencot, Michael Beuve, Hamid Ladjal, Jacques Saade, Jean-Michel Morreau, Philippe Giraud, Behzad Shariat

Abstract:

Organ motion, especially respiratory motion, is a technical challenge for radiation therapy planning and dosimetry. This motion induces displacements and deformations of the organ tissues within the irradiated region, which need to be taken into account when simulating dose distribution during treatment. Finite element modeling (FEM) can provide great insight into the mechanical behavior of the organs, since it is based on the biomechanical material properties, the complex geometry of the organs, and anatomical boundary conditions. In this paper, we present an original approach that makes it possible to combine image-based biomechanical models with particle transport simulations. We propose a new method to map material density information derived from CT images onto deformable tetrahedral meshes. Based on the principle of mass conservation, our method correlates the density variation of organ tissues with geometrical deformations during the different phases of the respiratory cycle. The first results are particularly encouraging: local error quantification of the density mapping on the organ geometry and of the density variation with organ motion is performed to evaluate and validate our approach.
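
The mass-conservation principle reduces, per element, to rescaling a tetrahedron's density by its volume ratio, ρ_def = ρ_ref · V_ref / V_def, as in this minimal sketch.

```python
# Per-tetrahedron density update under mass conservation:
# rho_deformed = rho_reference * V_reference / V_deformed.
import numpy as np

def tet_volume(v):
    # v: (4, 3) vertex coordinates of one tetrahedron
    return abs(np.linalg.det(v[1:] - v[0])) / 6.0

def deformed_density(rho_ref, verts_ref, verts_def):
    return rho_ref * tet_volume(verts_ref) / tet_volume(verts_def)

ref = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
squashed = ref * np.array([1.0, 1.0, 0.8])      # 20% compression along z
print(deformed_density(1.05, ref, squashed))    # density rises to ~1.31
```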

Keywords: Biomechanical simulation, dose distribution, image guided radiation therapy, organ motion, tetrahedral mesh, 4D-CT.

15 Analysing Environmental Risks and Perceptions of Risks to Assess Health and Well-being in Poor Areas of Abidjan

Authors: Kouassi Dongo, Christian Zurbrügg, Gueladio Cissé, Brigit Obrist, Marcel Tanner, Jean Biémi

Abstract:

This study analyzed environmental health risks and people's perceptions of risks related to waste management in poor settlements of Abidjan, in order to develop integrated solutions for improving health and well-being. The trans-disciplinary approach used relied on remote sensing, a geographic information system (GIS), and qualitative and quantitative methods such as interviews and a household survey (n=1800). Mitigating strategies were then developed in an integrated participatory stakeholder workshop. Waste management deficiencies, resulting in a lack of drainage and uncontrolled solid and liquid waste disposal in the poor settlements, lead to severe environmental health risks. Health problems were caused by the direct handling of waste, as well as through broader exposure of the population. People in poor settlements had little awareness of the health risks related to waste management in their community and a general lack of knowledge about sanitation systems. This unfortunate combination was the key determinant affecting health and vulnerability. For example, an increased prevalence of malaria (47.1%) and diarrhoea (19.2%) was observed in the rainy season compared to the dry season (32.3% and 14.3%, respectively). Concerted and adapted solutions that suited all the stakeholders concerned were developed in a participatory workshop to allow for the improvement of health and well-being.

Keywords: Abidjan, environmental health risks, informal settlements, vulnerability, waste management.

14 Trimmed Mean as an Adaptive Robust Estimator of a Location Parameter for Weibull Distribution

Authors: Carolina B. Baguio

Abstract:

One of the purposes of robust estimation is to reduce the influence of outliers in the data on the estimates. Outliers arise from gross errors or contamination from distributions with long tails. The trimmed mean is a robust estimate, meaning that it is not sensitive to violations of the distributional assumptions of the data. It is called an adaptive estimate when the trimming proportion is determined from the data rather than being fixed a priori. The main objective of this study is to establish the robustness properties of adaptive trimmed means in terms of efficiency, high breakdown point, and influence function. Specifically, it seeks the magnitude of the trimming proportion of the adaptive trimmed mean which will yield efficient and robust estimates of the location parameter for data following a modified Weibull distribution with parameter λ = 1/2, where the trimming proportion is determined by a ratio of two trimmed means defined as the tail length. Secondly, the asymptotic properties of the tail length and the trimmed means are investigated. Finally, a comparison is made of the efficiency of the adaptive trimmed means, in terms of the standard deviation, against trimmed means whose proportions were fixed a priori. The asymptotic tail lengths, defined as the ratio of two trimmed means, and the asymptotic variances were computed using the derived formulas, while the standard deviations of the derived tail lengths for samples of size 40 simulated from a Weibull distribution were computed over 100 iterations using a program written in Pascal. The findings of the study revealed that the tail lengths of the Weibull distribution increase in magnitude as the trimming proportions increase; the measure of tail length and the adaptive trimmed mean are asymptotically independent as the number of observations n approaches infinity; the tail length is asymptotically distributed as the ratio of two independent normal random variables; and the asymptotic variances decrease as the trimming proportions increase. The simulation study showed empirically that the standard error of the adaptive trimmed mean using the ratio of tail lengths is relatively smaller, for different values of trimming proportions, than that of its counterparts with trimming proportions fixed a priori.
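
A minimal sketch of the adaptive scheme follows: the tail length is taken as a ratio of two trimmed means and used to pick the trimming proportion from the data. The mapping from tail length to proportion below is illustrative, not the paper's exact rule.

```python
# Adaptive trimmed mean sketch: estimate tail length as a ratio of a
# lightly trimmed to a heavily trimmed mean, then choose the trimming
# proportion from it (illustrative thresholds).
import numpy as np
from scipy import stats

def tail_length(x, a=0.05, b=0.25):
    return stats.trim_mean(x, a) / stats.trim_mean(x, b)

def adaptive_trimmed_mean(x):
    t = tail_length(x)
    alpha = 0.10 if t < 1.2 else 0.20 if t < 1.5 else 0.30
    return stats.trim_mean(x, alpha), alpha

x = stats.weibull_min.rvs(0.5, size=40, random_state=7)   # Weibull, shape 1/2
print(adaptive_trimmed_mean(x))
```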

Keywords: Adaptive robust estimate, asymptotic efficiency, breakdown point, influence function, L-estimates, location parameter, tail length, Weibull distribution.

13 A New Multi-Target, Multi-Agent Search-and-Rescue Path Planning Approach

Authors: Jean Berger, Nassirou Lo, Martin Noel

Abstract:

Perfectly suited to natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and achieve a substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems while giving a robust upper bound obtained from Lagrangean relaxation of the integrality constraints. Should a target be positively detected during plan execution, a new problem instance is simply reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
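
A toy single-agent version of such a model can be written with the open-source PuLP modeler (used here as a stand-in for CPLEX): the agent moves on a graph for T steps and maximizes the summed prior probability of the distinct cells it visits. The anticipated-feedback and multi-agent machinery of the full formulation is omitted.

```python
# Toy single-agent search path MIP with PuLP (CBC as solver). prior maps
# cell -> target probability; neighbors maps cell -> cells reachable in
# one move (include the cell itself to allow waiting).
import pulp

def plan(prior, neighbors, start, T=5):
    cells = list(prior)
    m = pulp.LpProblem("sar_path", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", [(t, c) for t in range(T) for c in cells], cat="Binary")
    z = pulp.LpVariable.dicts("z", cells, cat="Binary")       # cell ever searched
    m += pulp.lpSum(prior[c] * z[c] for c in cells)           # expected detection
    m += x[(0, start)] == 1
    for t in range(T):
        m += pulp.lpSum(x[(t, c)] for c in cells) == 1        # one cell per step
    for t in range(1, T):
        for c in cells:                                       # move along edges only
            m += x[(t, c)] <= pulp.lpSum(x[(t - 1, p)] for p in neighbors[c])
    for c in cells:
        m += z[c] <= pulp.lpSum(x[(t, c)] for t in range(T))
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    return [c for t in range(T) for c in cells if x[(t, c)].value() > 0.5]
```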

Keywords: Search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization.

12 Liquid Chromatography Microfluidics for Detection and Quantification of Urine Albumin Using Linear Regression Method

Authors: Patricia B. Cruz, Catrina Jean G. Valenzuela, Analyn N. Yumang

Abstract:

Nearly one hundred per million of the Filipino population are diagnosed with Chronic Kidney Disease (CKD). The early stage of CKD has no symptoms and can only be discovered once the patient undergoes urinalysis. Over the years, different methods have been devised for the quantification of urinary albumin, such as immunochemical assays, most of which require large machinery with high maintenance and resource costs, and the dipstick test, which is yet to be proven and is still debated as a reliable method for detecting the early stages of microalbuminuria. This research study applies the liquid chromatography concept in a microfluidic instrument with a biosensor, as the means of separation and detection respectively, and linear regression to quantify human urinary albumin. The researchers' main objective was to create a miniature system that quantifies and detects patients' urinary albumin while reducing the volume used per five test samples. For this study, 30 urine samples of unknown albumin concentrations were tested using the VITROS Analyzer and the microfluidic system for comparison. Based on the data from both methods, the actual-versus-predicted regression showed a positive linear relationship with an R² of 0.9995 and a linear equation of y = 1.09x + 0.07, indicating that the predicted and actual values are approximately equal. Furthermore, the microfluidic instrument uses 75% less total volume (sample and reagents combined) than the VITROS Analyzer per five test samples.
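
The calibration step amounts to fitting the reference values against the microfluidic readings and reporting the fitted line and R²; a minimal sketch with synthetic stand-in numbers (not the study's data) follows.

```python
# Least-squares calibration line and R^2 between device readings and
# reference analyzer values. The data below are synthetic stand-ins.
import numpy as np

def calibrate(device, reference):
    slope, intercept = np.polyfit(device, reference, 1)
    pred = slope * device + intercept
    ss_res = np.sum((reference - pred) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return slope, intercept, 1 - ss_res / ss_tot

rng = np.random.default_rng(3)
device = rng.uniform(5, 300, 30)                      # 30 urine samples
reference = 1.09 * device + 0.07 + rng.normal(0, 1, 30)
print(calibrate(device, reference))                   # ~ (1.09, 0.07, 0.999+)
```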

Keywords: Chronic kidney disease, microfluidics, linear regression, VITROS analyzer, urinary albumin.

11 Advanced Stochastic Models for Partially Developed Speckle

Authors: Jihad S. Daba (Jean-Pierre Dubois), Philip Jreije

Abstract:

Speckled images arise when coherent microwave, optical, or acoustic imaging techniques are used to image an object, surface, or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems, and medical ultrasound systems. Speckle noise is a form of object- or target-induced noise that results when the surface of the object is Rayleigh-rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise are complicated by the nature of the noise and are not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling speckle in this context involves a partially developed speckle model in which an underlying Poisson point process modulates a Gram-Charlier series of Laguerre-weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in closed canonical form. It is observed that as the mean number of scatterers in a resolution cell increases, the probability density function approaches an exponential distribution, consistent with fully developed speckle noise as predicted by the Central Limit Theorem.
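
The stated limit can be checked numerically: simulate the intensity of a sum of N unit-amplitude scatterers with random phases, N Poisson distributed, and watch the intensity statistics approach an exponential law (std/mean → 1) as the mean number of scatterers grows.

```python
# Intensity of a filtered-Poisson scatterer sum: as the mean number of
# scatterers per resolution cell grows, std/mean of the intensity tends
# to 1, the signature of an exponential (fully developed) law.
import numpy as np

def speckle_intensity(mean_scatterers, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    counts = rng.poisson(mean_scatterers, trials)
    out = np.empty(trials)
    for i, k in enumerate(counts):
        phases = rng.uniform(0.0, 2.0 * np.pi, k)         # unit-amplitude scatterers
        out[i] = np.abs(np.exp(1j * phases).sum()) ** 2
    return out

for m in (2, 20):
    I = speckle_intensity(m)
    print(m, I.std() / I.mean())                          # -> 1 as m grows
```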

Keywords: Doubly stochastic filtered process, Poisson point process, segmentation, speckle, ultrasound.

10 Tokyo Skyscrapers: Technologically Advanced Structures in Seismic Areas

Authors: J. Szolomicki, H. Golasz-Szolomicka

Abstract:

An architectural and structural analysis of selected high-rise buildings in Tokyo is presented in this paper. The capital of Japan is the most densely populated city in the world and, moreover, is located in one of the most active seismic zones. The combination of these factors has resulted in sophisticated designs and innovative engineering solutions, especially in the design and construction of high-rise buildings. Foreign architectural studios specializing in the design of skyscrapers (such as Jean Nouvel, Kohn Pedersen Fox Associates, and Skidmore, Owings & Merrill) played a major role in the development of technological ideas and architectural forms for such extraordinary engineering structures. Among the projects they completed are examples of high-rise buildings that set precedents for future development. An essential aspect influencing the design of high-rise buildings is the need to account for their dynamic reaction to earthquakes and to counteract wind vortices. The need to control the motions of these buildings, induced by earthquake and wind forces, led to the development of various methods and devices for dissipating the energy which arises during such phenomena. Currently, Japan is a global leader in seismic technologies that protect high-rise structures against seismic influence; thanks to these achievements, the most modern skyscrapers in Tokyo can withstand earthquakes with a magnitude of over seven on the Richter scale. The damping devices applied are either passive, requiring no additional power supply, or active, suppressing the structure's reaction with the input of extra energy. In recent years, hybrid dampers have also been used, with an additional active element that improves the efficiency of the passive damping.

Keywords: Core structure, damping systems, high-rise buildings.

9 Sperm Whale Signal Analysis: Comparison using the Auto Regressive model and the Daubechies 15 Wavelets Transform

Authors: Olivier Adam, Maciej Lopatka, Christophe Laplanche, Jean-François Motsch

Abstract:

This article presents results obtained using a parametric approach and a Wavelet Transform in analysing signals emitted by sperm whales. The extraction of the intrinsic characteristics of these unique signals emitted by marine mammals remains a difficult exercise for various reasons: firstly, the signals are non-stationary, and secondly, they are obstructed by interfering background noise. In this article, we compare the advantages and disadvantages of both methods: autoregressive models and the Wavelet Transform. These approaches serve as an alternative to the commonly used estimators based on the Fourier Transform, for which the hypotheses necessary for its application are, in certain cases, not sufficiently satisfied. These modern approaches provide effective results, particularly for the periodic tracking of the signal's characteristics, and notably when the signal-to-noise ratio negatively affects signal tracking. Our objectives are twofold. The first is to identify the animal through its acoustic signature, which includes recognition of the marine mammal species and ultimately of the individual animal within the species. The second is much more ambitious and directly involves the intervention of cetologists in studying the sounds emitted by marine mammals in an effort to characterize their behaviour. We are working on an approach based on recordings of marine mammal signals, with findings derived from the Wavelet Transform; this article explores the reasons for this choice. In addition, thanks to the use of new processors, these algorithms, once heavy in calculation time, can now be integrated into a real-time system.
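
A minimal sketch of the two analyses on a synthetic click train is shown below, using a least-squares AR fit and PyWavelets' 'db15' decomposition; real hydrophone recordings are of course not reproduced.

```python
# AR fit and Daubechies-15 wavelet decomposition of a synthetic click
# train standing in for sperm whale recordings.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(4096) / 48000.0
signal = np.zeros_like(t)
signal[::1024] = 1.0                                  # idealized clicks
signal = np.convolve(signal, np.hanning(64), "same") + 0.05 * rng.standard_normal(t.size)

# AR(p) model via least squares on lagged samples
p = 8
X = np.column_stack([signal[p - k - 1 : -k - 1] for k in range(p)])
ar_coeffs = np.linalg.lstsq(X, signal[p:], rcond=None)[0]

coeffs = pywt.wavedec(signal, "db15", level=5)        # multiresolution analysis
print(len(ar_coeffs), [len(c) for c in coeffs])
```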

Keywords: Autoregressive model, Daubechies Wavelet, Fourier Transform, marine mammals, signal processing, spectrogram, sperm whale, Wavelet Transform.

8 Influence of Thermal Damage on the Mechanical Strength of Trimmed CFRP

Authors: Guillaume Mullier, Jean François Chatelain

Abstract:

Carbon Fiber Reinforced Plastics (CFRPs) are widely used for advanced applications, in particular in the aerospace, automotive, and wind energy industries. Once cured to near net shape, CFRP parts need several finishing operations, such as trimming, milling, or drilling, in order to accommodate fastening hardware and meet the final dimensions. The present research studies the effect of the cutting temperature in trimming on the mechanical strength of high-performance CFRP laminates used for aeronautics applications. The cutting temperature is of great importance when trimming CFRP: temperatures higher than the glass-transition temperature (Tg) of the resin matrix are highly undesirable, as they cause degradation of the matrix at the trimmed edges, which can severely affect the mechanical performance of the entire component. In this study, a 9.50 mm diameter CVD diamond-coated carbide tool with six flutes was used to trim 24-ply CFRP laminates. A cutting speed of 300 m/min and a feed rate of 1140 mm/min were used in the experiments. The tool was heated prior to trimming using a blowtorch, for temperatures ranging from 20°C to 300°C, and the temperature at the cutting edge was measured using embedded K-type thermocouples. Samples trimmed at different cutting temperatures, below and above Tg, were mechanically tested using three-point bending and short-beam loading configurations, with both new and worn cutting tools. The experiments with the new tools could not establish any correlation between the length of cut, the cutting temperature, and the mechanical performance: the mechanical strength was constant, regardless of the cutting temperature. However, for worn tools, which produced cutting temperatures of up to 450°C, thermal damage of the resin was observed. The mechanical tests showed a reduced mean resistance in the short-beam configuration, while the resistance in three-point bending decreased as the cutting temperature increased.

Keywords: Composites, Trimming, Thermal Damage, Surface Quality.

7 Genetic Algorithm for In-Theatre Military Logistics Search-and-Delivery Path Planning

Authors: Jean Berger, Mohamed Barkaoui

Abstract:

Discrete search path planning in a time-constrained, uncertain environment relying upon imperfect sensors is known to be hard, and the problem-solving techniques proposed so far to compute near real-time efficient path plans are mostly limited to providing solutions of only a few moves. A new information-theoretic, open-loop decision model explicitly incorporating false-alarm sensor readings is presented to solve a single-agent military logistics search-and-delivery path planning problem with anticipated feedback. The decision model consists in minimizing expected entropy, considering anticipated possible observation outcomes over a given time horizon; the model captures the uncertainty associated with observation events for all possible scenarios, entropy being a measure of uncertainty about the searched target's location. Feedback information resulting from possible sensor observation outcomes along the projected path plan is exploited to update anticipated unit target occupancy beliefs. For the first time, a compact belief update formulation is generalized to explicitly include false positive observation events that may occur during plan execution. A novel genetic algorithm is then proposed to efficiently solve search path planning, providing near-optimal solutions for practical, realistic problem instances. Given the run-time performance of the algorithm, a natural extension to a closed-loop environment, progressively integrating real visit outcomes on a rolling time horizon, can easily be envisioned. Computational results show the value of the approach in comparison to alternate heuristics.
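
A plausible form of such a Bayesian occupancy update including false positives is sketched below, where pd is the detection probability and pfa the false alarm probability; this is in the spirit of the compact update mentioned above, not the paper's exact formulation.

```python
# Bayesian update of the belief b that the target occupies the observed
# cell, given a sensor with detection probability pd and false alarm
# probability pfa (illustrative formulation).
def update_belief(b, observed_detection, pd=0.8, pfa=0.1):
    if observed_detection:
        num, den = pd * b, pd * b + pfa * (1 - b)
    else:
        num, den = (1 - pd) * b, (1 - pd) * b + (1 - pfa) * (1 - b)
    return num / den

b = 0.3
for obs in (True, False, True):
    b = update_belief(b, obs)
    print(round(b, 3))        # belief rises on detections, falls otherwise
```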

Keywords: Search path planning, false alarm, search-and-delivery, entropy, genetic algorithm.

6 Synthesis of Silver Nanoparticles by Chemical Reduction Method and Their Antibacterial Activity

Authors: Maribel G. Guzmán, Jean Dille, Stephan Godet

Abstract:

Silver nanoparticles were prepared by the chemical reduction method, with silver nitrate as the metal precursor and hydrazine hydrate as the reducing agent. The formation of the silver nanoparticles was monitored using UV-Vis absorption spectroscopy, which revealed their formation through the typical surface plasmon absorption maximum at 418-420 nm. Comparison of theoretical (Mie light scattering theory) and experimental results showed that the diameter of the silver nanoparticles in colloidal solution is about 60 nm. We used energy-dispersive spectroscopy (EDX), X-ray diffraction (XRD), transmission electron microscopy (TEM), and UV-Vis spectroscopy to characterize the nanoparticles obtained. The EDX spectrum of the nanoparticle dispersion confirmed the presence of the elemental silver signal; no peaks of other impurities were detected. The average size and morphology of the silver nanoparticles were determined by TEM: the photographs indicate that the nanopowders consist of well-dispersed agglomerates of grains with a narrow size distribution (40 to 60 nm), whereas the radii of the individual particles are between 10 and 20 nm. The synthesized nanoparticles were structurally characterized by X-ray diffraction and transmission high-energy electron diffraction (HEED); the peaks in the XRD pattern are in good agreement with the standard values of the face-centered cubic form of metallic silver (ICDD-JCPDS card no. 4-0787), and no peaks of other impurity crystalline phases were detected. Additionally, the antibacterial activity of the nanoparticle dispersion was measured by the Kirby-Bauer method. The silver nanoparticles showed high antimicrobial and bactericidal activity against both gram-negative and gram-positive bacteria such as Escherichia coli, Pseudomonas aeruginosa, and Staphylococcus aureus, the last being a highly methicillin-resistant strain.

Keywords: Silver nanoparticles, surface plasmon, UV-Vis absorption spectrum, chemicals reduction.

5 Laboratory Analysis of Stormwater Runoff Hydraulic and Pollutant Removal Performance of Pervious Concrete Based on Seashell By-Products

Authors: Jean-Jacques Randrianarimanana, Nassim Sebaibi, Mohamed Boutouil

Abstract:

In order to solve the problems associated with stormwater runoff in urban areas and its effects on natural and artificial water bodies, the integration of new technical solutions into rainwater drainage becomes ever more essential. Permeable pavement systems are among the most widely used techniques. This paper presents a laboratory analysis of the stormwater runoff hydraulics and pollutant removal performance of a permeable pavement system (PPS) using pervious pavers based on seashell by-products. The laboratory prototype is a square column, 25 cm per side, consisting of a pervious concrete surface, a 3 cm bedding layer, a geotextile, and a 50 cm subbase layer. A series of constant simulated rain events using semi-synthetic runoff, varied in intensity and duration, was carried out. The initial vertical saturated hydraulic conductivity of the entire pervious pavement system was 0.25 cm/s (148 L/m²/min). The hydraulic behaviour was influenced by both the inlet flow rate and the test duration, and total water losses, including evaporation, ranged between 9% and 20% across all hydraulic experiments. The temporal and vertical variability of the pollutant removal efficiency (PRE) of the system was studied for total suspended solids (TSS). The results showed that the PRE along the vertical profile was influenced by the size of the suspended solids, and that the pervious paver has a higher capacity to trap pollutants than the other porous layers of the system, after the geotextile. The TSS removal efficiency was about 80% for the entire system. A first-flush effect for TSS was observed, but it appeared only at the beginning (2 to 6 min) of the experiments, showing that the PPS can capture the first flush. The project in which this study is integrated aims to contribute both to the valorization of shellfish waste and to the sustainable management of rainwater.

Keywords: Hydraulic, pervious concrete, pollutant removal efficiency, seashell by-products, stormwater runoff.

4 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Interest in human motion recognition has increased greatly in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, and content-based video compression and retrieval. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem requiring an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both the quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions, and improve the classification algorithm by proposing two DHMMs per motion class which process the motion sequence in two different directions, forward and backward. This modification avoids the misclassification that can occur when recognizing similar motions. Two experiments were conducted. In the first, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action and gesture recognition methods. In the second experiment, we built a dataset composed of 10 gestures (introduce yourself, waving, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. The experimental results demonstrate that our method outperforms most existing methods evaluated on the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
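
A minimal sketch of the two-direction scoring idea: a scaled forward algorithm gives each discrete HMM's log-likelihood, and a motion is assigned to the class whose forward-trained and backward-trained models jointly score best. Model training is assumed done elsewhere; parameters are passed in directly.

```python
# Scaled forward algorithm for a discrete HMM, plus two-direction scoring.
# obs: sequence of symbol indices; pi: (S,), A: (S,S), B: (S,K) numpy arrays.
import numpy as np

def forward_loglik(obs, pi, A, B):
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()                     # scaling avoids underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

def classify(obs, models):
    # models: {label: ((pi, A, B) trained forward, (pi, A, B) trained backward)}
    scores = {
        label: forward_loglik(obs, *fwd) + forward_loglik(obs[::-1], *bwd)
        for label, (fwd, bwd) in models.items()
    }
    return max(scores, key=scores.get)
```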

Keywords: Human Motion Recognition, Motion representation, Laban Movement Analysis, Discrete Hidden Markov Model.

3 Multiphase Flow Regime Detection Algorithm for Gas-Liquid Interface Using Ultrasonic Pulse-Echo Technique

Authors: Serkan Solmaz, Jean-Baptiste Gouriet, Nicolas Van de Wyer, Christophe Schram

Abstract:

The efficiency of the cooling process for cryogenic propellants boiling in engine cooling channels in space applications is strongly affected by the phase change that occurs during boiling, and the effectiveness of the cooling process depends on the type of boiling regime, such as nucleate or film boiling. Geometric constraints, such as a non-transparent cooling channel, rule out any visualization method. The ultrasonic (US) technique, as a non-destructive testing (NDT) method, has been applied in almost every engineering field for different purposes. Discontinuities arise between media, such as the boundaries between different phases, and the sound wave emitted by the US transducer is both transmitted and reflected at a gas-liquid interface, making it possible to detect the different phases. Due to thermal and structural concerns, it is impractical to maintain direct contact between the US transducer and the working fluid, so the transducer must be located outside the cooling channel, which introduces additional interfaces and raises questions about the applicability of the method. In this work, an exploratory study was carried out to determine the detection ability and applicability of the US technique for a cryogenic boiling process in a cooling cycle where the US transducer is placed outside the channel. Since the boiling of cryogenics is a complex phenomenon whose thermal properties hinder experimental protocols, substitute materials were purposely selected to simplify the experiments; the nucleate and film boiling regimes emerging during the boiling process were simulated using non-deformable stainless steel balls, air-bubble injection apparatuses, and air clearances instead of conducting a real-time boiling process. A versatile detection algorithm was then developed on the basis of these exploratory studies. With this algorithm, the phases can be distinguished with 99% accuracy as no-phase, air-bubble, or air-film presences. The results show the detection ability and applicability of the US technique for this exploratory purpose.
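
A minimal sketch of what such an A-scan classifier might look like: the presence and timing of echoes arriving ahead of the channel-wall echo separate the no-phase, bubble, and film cases. The thresholds and time windows here are hypothetical, not the paper's calibrated values.

```python
# Hypothetical pulse-echo A-scan classifier: look for echoes arriving
# before the far-wall echo. wall_time is the known round-trip time to
# the far wall; thresholds are illustrative only.
import numpy as np

def classify_ascan(ascan, fs, wall_time, threshold=0.2):
    env = np.abs(ascan)                                   # crude envelope
    t = np.arange(env.size) / fs
    early = env[(t > 0.2 * wall_time) & (t < 0.9 * wall_time)]
    wall = env[(t >= 0.9 * wall_time) & (t < 1.1 * wall_time)]
    if early.size and early.max() > threshold:
        # a strong early interface that also kills the wall echo -> film
        if wall.size == 0 or early.max() > 3.0 * wall.max():
            return "air-film"
        return "air-bubble"
    return "no-phase"
```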

Keywords: Ultrasound, ultrasonic, multiphase flow, boiling, cryogenics, detection algorithm.

2 Influence of Crystal Orientation on Electromechanical Behaviors of Relaxor Ferroelectric P(VDF-TrFE-CTFE) Terpolymer

Authors: Qing Liu, Jean-Fabien Capsal, Claude Richard

Abstract:

In this contribution, we investigate the influence of crystal lamella orientation on the electromechanical behavior of relaxor ferroelectric poly(vinylidene fluoride-trifluoroethylene-chlorotrifluoroethylene) (P(VDF-TrFE-CTFE)) films by controlling the polymer microstructure, aiming to map the full structure-property relationship. In order to set their crystal orientation, terpolymer films were fabricated by solution casting, stretching, and hot pressing. Differential scanning calorimetry, an impedance analyzer, and tensile testing were employed to characterize the crystallographic parameters, dielectric permittivity, and elastic Young's modulus, respectively. In addition, large electrically induced out-of-plane electrostrictive strain was measured in a cantilever beam mode. The as-cast pristine films exhibited a surprisingly high electrostrictive strain of 0.1774% due to the considerably small value of the elastic Young's modulus, despite a relatively low dielectric permittivity; these factors yield a large mechanical elastic energy density. By contrast, due to a two-fold increase of the elastic Young's modulus and a less than 50% increase of the dielectric constant, the fully crystallized film showed weak electrostrictive behavior and a low mechanical energy density. Subjected to mechanical stretching, Film C exhibited a higher dielectric constant and a larger electrostrictive strain than Film B, owing to the edge-on crystal lamella orientation induced by uniaxial stretching. The hot-pressed films were compared in terms of cooling rate: a rather large electrostrictive strain of 0.2788% was observed for the quenched hot-pressed Film D, although its dielectric permittivity was equivalent to that of the pristine as-cast Film A, giving the highest mechanical elastic energy density of 359.5 J/m³. For the slowly cooled hot-pressed Film E, the dielectric permittivity reached 48.8, concomitant with a ca. 100% increase of the Young's modulus, and films with intermediate mechanical energy density were obtained.
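
As a consistency check on these figures, assuming the usual elastic energy density U = ½YS² for strain S and Young's modulus Y (an assumption; the paper's exact definition is not given here), inverting it for Film D gives a modulus of roughly 92 MPa, a plausible value for a soft terpolymer film.

```python
# Back-of-the-envelope check assuming U = (1/2) Y S^2 (assumed form):
# Film D has U = 359.5 J/m^3 at S = 0.2788%.
strain = 0.2788 / 100
energy_density = 359.5                      # J/m^3
young_modulus = 2 * energy_density / strain ** 2
print(young_modulus / 1e6, "MPa")           # ~92.5 MPa
```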

Keywords: Crystal orientation, electrostrictive strain, mechanical energy density, permittivity, relaxor ferroelectric.

1 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA and stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most such methods use the concept of word and sentence embeddings to create a meaningful numerical representation of DNA sequences, while extracting features and reducing the dimensionality of the data. In this paper, we present metagenome2vec, an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which a sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches performance comparable with the state-of-the-art methods applied to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
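
Step (i) can be sketched with gensim's word2vec over overlapping k-mers, as below; the read-embedding, genome-identification, and multiple-instance stages are not shown, and the toy reads are obviously not real data.

```python
# k-mer vocabulary and embeddings (step i of metagenome2vec): cut each
# read into overlapping k-mers and train a skip-gram word2vec on them.
from gensim.models import Word2Vec

def kmerize(read, k=6):
    return [read[i:i + k] for i in range(len(read) - k + 1)]

reads = ["ACGTGACCTGAAAC", "TTGACGTGACCTGA", "CCTGAAACGTTGAC"]  # toy reads
sentences = [kmerize(r) for r in reads]
model = Word2Vec(sentences, vector_size=32, window=5, min_count=1, sg=1, epochs=50)
print(model.wv["ACGTGA"][:4])               # embedding of one k-mer
```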

Keywords: Metagenomics, phenotype prediction, deep learning, embeddings, multiple instance learning.
