Search results for: dynamic PET images
4055 Pose Normalization Network for Object Classification
Authors: Bingquan Shen
Abstract:
Convolutional Neural Networks (CNN) have demonstrated their effectiveness in synthesizing 3D views of object instances at various viewpoints. Given the problem where one has only limited viewpoints of a particular object for classification, we present a pose normalization architecture that transforms the object to existing viewpoints in the training dataset before classification, to yield better classification performance. We have demonstrated that this Pose Normalization Network (PNN) can capture the style of the target object and is able to re-render it to a desired viewpoint. Moreover, we have shown that the PNN improves the classification results for the 3D chairs dataset and the ShapeNet airplanes dataset when given only images at limited viewpoints, as compared to a CNN baseline.
Keywords: convolutional neural networks, object classification, pose normalization, viewpoint invariant
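A minimal sketch of the idea in PyTorch follows: an encoder captures the object's style, a target-viewpoint code is fused in, and a decoder re-renders the normalized view that would then be passed to a CNN classifier. The layer sizes, the 64x64 resolution, and the one-hot viewpoint encoding are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of a pose-normalization pipeline in PyTorch.
# Layer sizes and the one-hot viewpoint code are illustrative assumptions.
import torch
import torch.nn as nn

class PoseNormalizationNet(nn.Module):
    """Encode an input view, condition on a target viewpoint, re-render."""
    def __init__(self, n_viewpoints=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),
        )
        # Fuse the image code ("style") with the desired viewpoint code.
        self.fuse = nn.Linear(256 + n_viewpoints, 64 * 16 * 16)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x, target_view):
        z = self.encoder(x)                      # object "style" code
        z = torch.cat([z, target_view], dim=1)   # condition on the viewpoint
        z = self.fuse(z).view(-1, 64, 16, 16)
        return self.decoder(z)                   # re-rendered view

pnn = PoseNormalizationNet()
img = torch.rand(4, 1, 64, 64)                   # batch of input views
view = torch.eye(8)[torch.tensor([0, 0, 0, 0])]  # one-hot target viewpoint
normalized = pnn(img, view)                      # fed to the CNN classifier
print(normalized.shape)                          # torch.Size([4, 1, 64, 64])
```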
Procedia PDF Downloads 352
4054 Analyzing the Performance of Different Cost-Based Methods for the Corrective Maintenance of a System in Thermal Power Plants
Authors: Demet Ozgur-Unluakin, Busenur Turkali, S. Caglar Aksezer
Abstract:
Since the age of industrialization, maintenance has always been a crucial element for all kinds of factories and plants. With today's increasingly developing technology, the system structure of such facilities has become more complicated, and even a small operational disruption may cause huge losses in profit for the companies. In order to reduce these costs, effective maintenance planning is crucial, but at the same time, it is a difficult task because of the complexity of systems. The most important aspect of correct maintenance planning is to understand the structure of the system, not to ignore the dependencies among the components, and, as a result, to model the system correctly. In this way, it is easier to understand which component, when maintained, improves the system the most. Undoubtedly, proactive maintenance at a scheduled time reduces costs because scheduled maintenance prevents high losses in profit. But the necessity of corrective maintenance, which directly addresses the state of the system and provides immediate intervention when the system fails, should not be ignored. When a fault occurs in the system, if the problem is not solved immediately and the next scheduled maintenance is awaited instead, costs may increase. This study proposes various maintenance methods with different efficiency measures under a corrective maintenance strategy on a subsystem of a thermal power plant. To model the dependencies between the components, a dynamic Bayesian network approach is employed. The proposed maintenance methods aim to minimize the total maintenance cost over a planning horizon, as well as to find the most appropriate component to act on, i.e., the one whose repair improves system reliability the most. Performances of the methods are compared under the corrective maintenance strategy. Furthermore, a sensitivity analysis is also applied under different cost values. Results show that all fault effect methods perform better than the replacement effect methods, and this conclusion also holds under different downtime cost values.
Keywords: dynamic Bayesian networks, maintenance, multi-component systems, reliability
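As a toy illustration of comparing component-selection rules under a corrective maintenance strategy, the sketch below runs a Monte Carlo simulation of a serial three-component system. The failure probabilities, costs, and the two heuristic policies are invented for illustration; they stand in for the dynamic Bayesian network model and the fault/replacement effect methods of the study.

```python
# Toy Monte Carlo comparison of corrective-maintenance selection rules on a
# serial 3-component system. Probabilities and costs are invented; a dynamic
# Bayesian network would be needed to model component dependencies.
import random

FAIL_P = [0.05, 0.10, 0.02]   # per-period failure probability per component
REPAIR = [100, 60, 150]       # repair cost per component
DOWNTIME = 500                # cost per period in which the system is down

def simulate(policy, horizon=100, runs=2000):
    total = 0.0
    for _ in range(runs):
        state = [True] * 3                      # all components working
        for _ in range(horizon):
            for i in range(3):
                if state[i] and random.random() < FAIL_P[i]:
                    state[i] = False
            if not all(state):                  # serial system: any failure downs it
                total += DOWNTIME
                i = policy([j for j in range(3) if not state[j]])
                total += REPAIR[i]
                state[i] = True                 # corrective repair of one component
    return total / runs

cheapest_first = lambda failed: min(failed, key=lambda i: REPAIR[i])
most_fragile_first = lambda failed: max(failed, key=lambda i: FAIL_P[i])
print("cheapest-first cost:    ", round(simulate(cheapest_first), 1))
print("most-fragile-first cost:", round(simulate(most_fragile_first), 1))
```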
Procedia PDF Downloads 128
4053 The Significance of Picture Mining in Fashion and Design as a New Research Method
Authors: Katsue Edo, Yu Hiroi
Abstract:
Increasing attention has been paid to using pictures and photographs in research since the beginning of the 21st century in the social sciences. Meanwhile, we have been studying the usefulness of Picture Mining, one of the new methods for such picture-based research. Picture Mining is an explorative research analysis method that extracts useful information from pictures, photographs, and static or moving images. It is often compared with text mining methods. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, identified as technological and social changes (Edo et al. 2013). Low-priced digital cameras and iPhones, high information transmission speeds, low costs for transferring information, and the high performance and resolution of mobile phone cameras have changed people's photographing behavior. Consequently, there is less resistance to taking and processing photographs for most people in developing countries. In these studies, this method of collecting data from respondents is often called 'participant-generated photography' or 'respondent-generated visual imagery', which focuses on the collection of data and its analysis (Pauwels 2011, Snyder 2012). But there are few systematic and conceptual studies that support the significance of these methods. In recent years, we have worked to conceptualize these picture-based research methods and to formalize theoretical findings (Edo et al. 2014). We have identified the most promising fields for Picture Mining inductively and through case studies as the following: 1) research in consumer and customer lifestyles, 2) new product development, and 3) research in fashion and design. Though we have found that it should be useful in these fields and areas, we must verify these assumptions. In this study, we focus on the field of fashion and design to determine whether Picture Mining methods are really reliable in this area. In order to do so, we conducted empirical research on respondents' attitudes and behavior concerning pictures and photographs. We compared picture-taking attitudes and behavior toward fashion with those toward meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Respondents do not often take pictures of fashion and upload them online, for example to Facebook and Instagram, compared to meals and food, because of the difficulty of taking them. We concluded that we should be more careful in analyzing pictures in the fashion area, for some bias might still exist even though the environment for pictures has changed drastically in recent years.
Keywords: empirical research, fashion and design, Picture Mining, qualitative research
Procedia PDF Downloads 363
4052 Multimetallic and Multiferrocenyl Assemblies of Ferrocenyl-Based Dithiophosphonate and Their Electrochemical Properties
Authors: J. Tomilla Ajayi, Werner E. Van Zyl
Abstract:
This work presents an overview of the reaction of 2,4-diferrocenyl-1,3-dithiadiphosphetane-2,4-disulfide (ferrocenyl Lawesson's reagent) with water to produce the non-symmetric ferrocenyl dithiophosphonic acid in high yield. The acid was readily deprotonated by anhydrous ammonia to yield the corresponding ammonium salt, NH4S2PFcOH. This was complexed to Ni(II) in 1:1 and 1:2 molar ratios. The reaction yielded the same compound as different isomers (cis and trans), as well as a compound with multimetallic coordination. X-ray-quality crystals were grown from THF/ether. The compounds were characterized by 1H and 31P NMR and FTIR spectroscopy. Bulk purity was confirmed by either ESI-MS or elemental analysis, and the structures were determined by single-crystal X-ray crystallographic studies. The electrochemical properties of the compounds were investigated using cyclic voltammetry.
Keywords: ferrocenyl, dithiophosphonate, isomer, coordination
Procedia PDF Downloads 248
4051 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The task for the first dataset is protein regression, while for the second it is variety classification. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested. These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
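Two of the chemometric preprocessing steps named in the abstract, SNV and SG filtering, can be sketched in a few lines; the cube shape and filter settings below are assumptions chosen for illustration.

```python
# Sketch of two spectral preprocessing steps: standard normal variate (SNV)
# and Savitzky-Golay (SG) filtering, applied along the spectral axis of a
# hyperspectral cube (rows x cols x bands). Shapes and settings are assumed.
import numpy as np
from scipy.signal import savgol_filter

def snv(cube):
    """Center and scale each pixel spectrum to zero mean, unit variance."""
    mean = cube.mean(axis=-1, keepdims=True)
    std = cube.std(axis=-1, keepdims=True)
    return (cube - mean) / (std + 1e-12)

def sg(cube, window=11, polyorder=2, deriv=1):
    """Savitzky-Golay smoothing/derivative along the spectral axis."""
    return savgol_filter(cube, window, polyorder, deriv=deriv, axis=-1)

cube = np.random.rand(64, 64, 224)    # stand-in for a 900-1700 nm NIR cube
preprocessed = sg(snv(cube))
print(preprocessed.shape)             # (64, 64, 224)
```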
Procedia PDF Downloads 99
4050 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping
Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello
Abstract:
Batch processes are widely used in the food industry and play an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure and are usually monitored using control charts based on multiway principal components analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; it is clear that proper determination of the reference set is key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses, so the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process; as a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables' trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power to classify batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts' evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when input to the KNN classification algorithm. The method of Kassidas, MacGregor and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy in the testing portion using the KNN classification technique.
Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration
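A bare-bones sketch of the core DTW step, warping one batch trajectory of arbitrary duration onto a reference, might look as follows. The trajectories are synthetic, and the KMT refinements (iterative reference updating, multivariate weighting) are not reproduced here.

```python
# Bare-bones dynamic time warping of one batch trajectory onto a reference of
# different duration; a pure-NumPy sketch of the alignment step.
import numpy as np

def dtw_path(a, b):
    """Accumulated cost and optimal alignment path between series a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    i, j, path = n, m, []                  # backtrack to recover the path
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

ref = np.sin(np.linspace(0, 3, 495))       # reference batch trajectory
new = np.sin(np.linspace(0, 3, 700)) + 0.05 * np.random.randn(700)
cost, path = dtw_path(new, ref)
synchronized = np.empty(len(ref))
for i, j in path:
    synchronized[j] = new[i]               # new batch on the reference grid
print(cost, synchronized.shape)            # ready for MPCA charts or KNN
```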
Procedia PDF Downloads 167
4049 Design, Analysis and Obstacle Avoidance Control of an Electric Wheelchair with Sit-Sleep-Seat Elevation Functions
Authors: Waleed Ahmed, Huang Xiaohua, Wilayat Ali
Abstract:
Wheelchair users are generally exposed to physical and psychological health problems, e.g., pressure sores and pain in the hip joint, associated with seating posture or being inactive in a wheelchair for a long time. A reclining wheelchair with back, thigh, and leg adjustment helps in daily life activities and health preservation. The seat-elevating function of an electric wheelchair allows a user with lower limb amputation to reach different heights. An electric wheelchair is expected to ease the lives of elderly and disabled people by giving them mobility support and decreasing the percentage of accidents caused by users' narrow sight or joystick operation errors. Thus, this paper proposes the design, analysis and obstacle avoidance control of an electric wheelchair with sit-sleep-seat elevation functions. A 3D model of the wheelchair was designed in SolidWorks and later used for multi-body dynamic (MBD) analysis and to verify the driving control system. The control system uses a fuzzy algorithm to avoid obstacles, taking as inputs the distance from the ultrasonic sensor and the user-specified direction from the joystick's operation. The proposed fuzzy driving control system governs the direction and velocity of the wheelchair. The wheelchair model has been examined and proven in MSC Adams (Automated Dynamic Analysis of Mechanical Systems). The designed fuzzy control algorithm is implemented in the Gazebo robotic 3D simulator using the Robot Operating System (ROS) middleware. The proposed wheelchair design enhances mobility and quality of life by improving the user's functional capabilities. Simulation results verify the non-accidental behavior of the electric wheelchair.
Keywords: fuzzy logic control, joystick, multi body dynamics, obstacle avoidance, scissor mechanism, sensor
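A minimal sketch of such a fuzzy rule base is shown below: ultrasonic distance and the joystick speed command go in, a scaled safe velocity comes out. The membership functions and rules are illustrative assumptions, not the paper's controller.

```python
# Sketch of a fuzzy obstacle-avoidance rule base: ultrasonic distance and the
# joystick speed command in, a scaled safe velocity out.
def tri(x, a, b, c):
    """Triangular membership; a == b or b == c gives a shoulder."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if a == b else (x - a) / (b - a)
    return 1.0 if b == c else (c - x) / (c - b)

def safe_velocity(distance_cm, joystick_speed):
    near = tri(distance_cm, 0, 0, 80)
    mid = tri(distance_cm, 40, 100, 160)
    far = tri(distance_cm, 120, 250, 250)
    # Rules: near -> stop, mid -> slow down, far -> pass the command through.
    weights = [near, mid, far]
    outputs = [0.0, 0.4 * joystick_speed, joystick_speed]
    return sum(w * o for w, o in zip(weights, outputs)) / (sum(weights) + 1e-9)

print(safe_velocity(30, 1.0))    # obstacle close: wheelchair stops
print(safe_velocity(200, 1.0))   # clear path: joystick command passed through
```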
Procedia PDF Downloads 129
4048 3-Dimensional (3D) Assessment of Hippocampus in Alzheimer’s Disease
Authors: Mehmet Bulent Ozdemir, Sultan Çagirici, Sahika Pinar Akyer, Fikri Turk
Abstract:
Neuroanatomical appearance can be correlated with clinical or other characteristics of illness. With the introduction of diagnostic imaging machines producing 3D images of anatomic structures, calculating correlations between subjects and structural patterns has become possible. The aim of this study is to examine the 3D structure of the hippocampus in cases of Alzheimer’s disease at different dementia severities. For this purpose, MR scans of 62 female and 38 male patients (age range 52-88) were imported to the computer. A 3D model of each right and left hippocampus was developed using a computer-aided programme, Surf Driver 3.5. Every reconstruction was made by the same investigator. The hippocampi showed a range of appearances from normal to abnormal. In conclusion, these results might improve the understanding of the correlation between morphological changes in the hippocampus and clinical staging in Alzheimer’s disease.
Keywords: Alzheimer disease, hippocampus, computer-assisted anatomy, 3D
Procedia PDF Downloads 481
4047 The Impact of the Covid-19 Crisis on the Information Behavior in the B2B Buying Process
Authors: Stehr Melanie
Abstract:
The availability of apposite information is essential for the decision-making process of organizational buyers. Due to the constraints of the Covid-19 crisis, information channels that emphasize face-to-face contact (e.g. sales visits, trade shows) have been unavailable, and usage of digitally-driven information channels (e.g. videoconferencing, platforms) has skyrocketed. This paper explores the question of in which areas the pandemic-induced shift in the use of information channels could be sustainable and in which areas it is a temporary phenomenon. While information and buying behavior in B2C purchases have been regularly studied in the last decade, the last fundamental model of organizational buying behavior in B2B was introduced by Johnston and Lewin (1996), before the advent of the internet. Subsequently, research efforts in B2B marketing shifted from organizational buyers and their decision and information behavior to the business relationships between sellers and buyers. This study builds on the extensive literature on situational factors influencing organizational buying and information behavior and uses the economics of information theory as a theoretical framework. The research focuses on the German woodworking industry, which before the Covid-19 crisis was characterized by a rather low level of digitization of information channels. An industry with traditional communication structures hit by an exogenous shock is considered a ripe research setting for studying shifts in information behavior. The study is exploratory in nature. The primary data source is 40 in-depth interviews based on the repertory-grid method. Thus, 120 typical buying situations in the woodworking industry, and the information and channels relevant to them, are identified. The results are combined into clusters, each of which shows similar information behavior in the procurement process. In the next step, the clusters are analyzed in terms of pre- and post-Covid-19 crisis behavior, identifying stable and dynamic aspects of information behavior. Initial results show that, for example, clusters representing search goods with low risk and complexity suggest a sustainable rise in the use of digitally-driven information channels. However, in clusters containing trust goods with high significance and novelty, an increased return to face-to-face information channels can be expected after the Covid-19 crisis. The results are interesting from both a scientific and a practical point of view. This study is one of the first to apply the economics of information theory to organizational buyers and their decision and information behavior in the digital information age. Especially the focus on the dynamic aspects of information behavior after an exogenous shock might contribute new impulses to theoretical debates related to the economics of information theory. For practitioners - especially suppliers' marketing managers and intermediaries such as publishers or trade show organizers from the woodworking industry - the study shows wide-ranging starting points for a future-oriented segmentation of their marketing program by highlighting the dynamic and stable preferences of the elaborated clusters in the choice of their information channels.
Keywords: B2B buying process, crisis, economics of information theory, information channel
Procedia PDF Downloads 184
4046 Complexity in a Leslie-Gower Delayed Prey-Predator Model
Authors: Anuraj Singh
Abstract:
Complex dynamics are explored in a prey-predator system with multiple delays. The predator dynamics are governed by the Leslie-Gower scheme. The existence of periodic solutions via Hopf bifurcation with respect to the delay parameters is established. To substantiate the analytical findings, numerical simulations are performed. The system shows rich dynamic behavior, including chaos and limit cycles.
Keywords: chaos, Hopf bifurcation, stability, time delay
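Such a system can be simulated with a fixed-step integrator and a history buffer for the delayed term, as in the sketch below; the particular modified Leslie-Gower equations and parameter values are an illustrative choice, not necessarily the paper's system.

```python
# Fixed-step Euler integration of a delayed Leslie-Gower prey-predator model
# with a constant history function; equations and parameters are illustrative.
import numpy as np

r1, r2, b, a1, a2, k1, k2 = 1.0, 0.5, 0.1, 0.8, 0.6, 10.0, 10.0
tau, dt, T = 2.0, 0.01, 300.0
lag, n = int(tau / dt), int(T / dt)

x, y = np.empty(n), np.empty(n)
x[:lag + 1], y[:lag + 1] = 5.0, 2.0        # constant history on [-tau, 0]

for t in range(lag, n - 1):
    xd = x[t - lag]                        # delayed prey density x(t - tau)
    dx = x[t] * (r1 - b * xd) - a1 * x[t] * y[t] / (x[t] + k1)
    dy = y[t] * (r2 - a2 * y[t] / (x[t] + k2))   # Leslie-Gower predator growth
    x[t + 1] = x[t] + dt * dx
    y[t + 1] = y[t] + dt * dy

# For delays past the Hopf threshold, the trajectory can settle on a limit
# cycle; its amplitude can be read from the tail of the series:
print(x[-5000:].min(), x[-5000:].max())
```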
Procedia PDF Downloads 326
4045 Influence of Flexible Plate's Contour on Dynamic Behavior of High Speed Flexible Coupling of Combat Aircraft
Authors: Dineshsingh Thakur, S. Nagesh, J. Basha
Abstract:
A lightweight High Speed Flexible Coupling (HSFC) is used to connect the Engine Gear Box (EGB) with an Accessory Gear Box (AGB) of a combat aircraft. The HSFC transmits power at high speeds, ranging from 10,000 to 18,000 rpm, from the EGB to the AGB. The HSFC also accommodates the larger misalignments resulting from thermal expansion of the aircraft engine and the mounting arrangement. The HSFC has a series of metallic, contoured, annular, thin cross-sectioned flexible plates to accommodate the misalignments through elastic flexure of the material. As the HSFC operates at high speed, the flexural and axial resonance frequencies must be kept away from the operating speed, and proper prediction is required to prevent failure in the transmission line of a single-engine fighter aircraft. To study the influence of the flexible plate's contour on the lateral critical speed (LCS) of the HSFC, a mathematical model of the HSFC as an eleven-rotor system is developed. The flexible plate being the bending member of the system, its bending stiffness, which results from the contour, governs the LCS. Using the transfer matrix method, the influence of various flexible plate contours on the critical speed is analyzed. In the analysis, the effect of support bearing flexibility on the critical speed prediction is also considered. Based on the study, a model is built with the optimum contour of the flexible plate for validation by experimental modal analysis. A good correlation between the theoretical prediction and the model behavior is observed. From the study, it is found that the flexible plate's contour plays a vital role in modifying the system's dynamic behavior, and the present model can be extended to the development of similar flexible couplings owing to its computational simplicity and reliability.
Keywords: flexible rotor, critical speed, experimental modal analysis, high speed flexible coupling (HSFC), misalignment
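The transfer matrix method mentioned here can be sketched for a pinned-pinned lumped-mass rotor: a critical speed is a root of the boundary-condition determinant assembled from field and point matrices. All geometry, stiffness, and mass values below are illustrative assumptions, not the HSFC's data.

```python
# Sketch of the transfer matrix method for the lateral critical speed of a
# rotor idealized as lumped masses on massless elastic spans (pinned-pinned).
# State vector z = [y, theta, M, V]; all numbers are illustrative.
import numpy as np

def field(L, EI):
    """Massless elastic span of length L and bending stiffness EI."""
    return np.array([[1.0, L, L**2 / (2 * EI), L**3 / (6 * EI)],
                     [0.0, 1.0, L / EI, L**2 / (2 * EI)],
                     [0.0, 0.0, 1.0, L],
                     [0.0, 0.0, 0.0, 1.0]])

def point(m, w):
    """Lumped mass m whirling synchronously at angular speed w (rad/s)."""
    P = np.eye(4)
    P[3, 0] = m * w**2                  # inertia load jumps the shear force
    return P

def residual(w, masses, spans, EI):
    T = np.eye(4)
    for m, L in zip(masses, spans[:-1]):
        T = point(m, w) @ field(L, EI) @ T
    T = field(spans[-1], EI) @ T
    # Pinned-pinned ends: y = M = 0 on both sides; a nontrivial solution
    # exists where the 2x2 boundary determinant vanishes.
    return np.linalg.det(T[np.ix_([0, 2], [1, 3])])

masses = [2.0, 2.0, 2.0]                          # kg
spans = [0.05, 0.05, 0.05, 0.05]                  # m
EI = 2.0e3                                        # N*m^2
ws = np.linspace(10, 5000, 4000)                  # rad/s scan
res = np.array([residual(w, masses, spans, EI) for w in ws])
i = np.where(np.diff(np.sign(res)) != 0)[0][0]    # first sign change = root
print(f"first lateral critical speed ~ {ws[i] * 60 / (2 * np.pi):.0f} rpm")
```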
Procedia PDF Downloads 215
4044 Multi-Focus Image Fusion Using SFM and Wavelet Packet
Authors: Somkait Udomhunsakul
Abstract:
In this paper, a multi-focus image fusion method using Spatial Frequency Measurements (SFM) and the wavelet packet transform is proposed. In the proposed fusion approach, the two source images are first transformed and decomposed into sixteen subbands using the wavelet packet transform. Next, each subband is partitioned into sub-blocks, and the clearer regions in each block are identified using the Spatial Frequency Measurement (SFM). Finally, the fused image is reconstructed by performing the inverse wavelet packet transform. From the experimental results, it was found that the proposed method outperformed the traditional SFM-based methods in terms of objective and subjective assessments.
Keywords: multi-focus image fusion, wavelet packet, spatial frequency measurement
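A compact sketch of the scheme using PyWavelets follows; for brevity the clearer source is selected per subband rather than per sub-block, and the test images and wavelet choice are assumptions.

```python
# Sketch of SFM-guided wavelet packet fusion of two multi-focus images;
# selection is done per subband here rather than per sub-block as in the paper.
import numpy as np
import pywt

def spatial_frequency(block):
    """SFM: combined row and column frequency of a coefficient block."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))
    return np.hypot(rf, cf)

img_a = np.random.rand(128, 128)    # stand-ins for the two source images
img_b = np.random.rand(128, 128)

wp_a = pywt.WaveletPacket2D(img_a, wavelet='db1', mode='symmetric', maxlevel=2)
wp_b = pywt.WaveletPacket2D(img_b, wavelet='db1', mode='symmetric', maxlevel=2)
fused = pywt.WaveletPacket2D(data=None, wavelet='db1', mode='symmetric')

for node in wp_a.get_level(2):      # 16 subbands at decomposition level 2
    a, b = wp_a[node.path].data, wp_b[node.path].data
    fused[node.path] = a if spatial_frequency(a) >= spatial_frequency(b) else b

result = fused.reconstruct(update=False)   # inverse wavelet packet transform
print(result.shape)
```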
Procedia PDF Downloads 474
4043 Construction of a Dynamic Migration Model of Extracellular Fluid in Brain for Future Integrated Control of Brain State
Authors: Tomohiko Utsuki, Kyoka Sato
Abstract:
In emergency medicine, it is recognized that brain resuscitation is very important for reducing the mortality rate and neurological sequelae. In particular, control of brain temperature (BT), intracranial pressure (ICP), and cerebral blood flow (CBF) is most required for stabilizing the brain's physiological state in the treatment of, for example, brain injury, stroke, and encephalopathy. However, the manual control of BT, ICP, and CBF frequently requires decisions and operations by medical staff, relevant to medication and the setting of therapeutic apparatus. Thus, integrating and automating such control is very effective not only for improving the therapeutic effect but also for reducing staff burden and medical cost. For realizing such integration and automation, a mathematical model of the brain's physiological state is necessary as the controlled object in simulations, because performance tests of a prototype control system on patients are not ethically allowed. A model of cerebral blood circulation, the most basic part of the brain's physiological state, has already been constructed. A migration model of extracellular fluid in the brain has also been constructed; however, that model did not consider the constraint that the total volume of the intracranial cavity is almost constant due to the rigidity of the cranial bone. Therefore, in this research, a dynamic migration model of extracellular fluid in the brain was constructed taking the constancy of the intracranial cavity's total volume into consideration. This model is connectable to the cerebral blood circulation model. The constructed model consists of fourteen compartments, twelve of which correspond to the perfused areas of the bilateral anterior, middle and posterior cerebral arteries, while the others correspond to the cerebral ventricles and the subarachnoid space. The model enables calculation of the migration of tissue fluid from capillaries to gray matter and white matter, the flow of tissue fluid between compartments, the production and absorption of cerebrospinal fluid at the choroid plexus and arachnoid granulation, and the production of metabolic water. Further, the volume, colloid concentration, and tissue pressure of/in each compartment can be calculated by solving 40-dimensional non-linear simultaneous differential equations. In this research, the obtained model was analyzed for validation under four conditions: a normal adult, an adult with higher cerebral capillary pressure, an adult with lower cerebral capillary pressure, and an adult with lower colloid concentration in the cerebral capillaries. In the results, the calculated fluid flow, tissue volume, colloid concentration, and tissue pressure all converged to values suitable for the set condition within 60 minutes at most. Because these results do not conflict with prior knowledge, the model can adequately represent the physiological state of the brain, at least under such limited conditions. One of the next challenges is to integrate this model with the already constructed cerebral blood circulation model. This modification will enable more precise simulation of CBF and ICP by calculating the effect of blood pressure changes on extracellular fluid migration and of ICP changes on CBF.
Keywords: dynamic model, cerebral extracellular migration, brain resuscitation, automatic control
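The flavor of such a model can be conveyed by a toy two-compartment version with a rigid-cranium volume constraint, integrated with SciPy; all coefficients below are invented, and the sketch is far simpler than the paper's fourteen-compartment, 40-equation model.

```python
# Toy two-compartment sketch of extracellular fluid exchange under a rigid-
# cranium (constant total volume) constraint; coefficients are invented.
import numpy as np
from scipy.integrate import solve_ivp

E = 0.02                   # ml/min/mmHg, exchange coefficient
PROD, ABSORB = 0.40, 0.35  # ml/min, CSF production and absorption

def rhs(t, v):
    v1, v2 = v
    p1 = 5.0 + 0.5 * (v1 - 100.0)     # crude volume-pressure relations
    p2 = 5.0 + 0.5 * (v2 - 50.0)
    q12 = E * (p1 - p2)               # tissue fluid flow, compartment 1 -> 2
    dv1 = PROD - q12
    dv2 = q12 - ABSORB
    # Rigid cranium: redistribute any net volume change so v1 + v2 is fixed.
    correction = (dv1 + dv2) / 2.0
    return [dv1 - correction, dv2 - correction]

sol = solve_ivp(rhs, (0.0, 60.0), [100.0, 50.0], max_step=0.1)
print(sol.y[:, -1], sol.y[:, -1].sum())   # volumes relax; total stays 150 ml
```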
Procedia PDF Downloads 156
4042 Relation between Initial Stability of the Dental Implant and Bone-Implant Contact Level
Authors: Jui-Ting Hsu, Heng-Li Huang, Ming-Tzu Tsai, Kuo-Chih Su, Lih-Jyh Fuh
Abstract:
The objectives of this study were to measure the initial stability of the dental implant (ISQ and PTV) in artificial foam bone blocks of three different quality levels. In addition, the 3D bone-to-implant contact percentage (BIC%) was measured based on micro-computed tomography images. Furthermore, the relation between the initial stability of the dental implant (ISQ and PTV) and BIC% was calculated. The experimental results indicated that enhancing the material properties of the artificial foam bone increased the initial stability of the dental implant. The Pearson correlation coefficients between BIC% and the two measures (ISQ and PTV) were 0.652 and 0.745, respectively.
Keywords: dental implant, implant stability quotient, peak insertion torque, bone-implant contact, micro-computed tomography
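The reported correlation analysis can be sketched as follows; the arrays are made-up stand-ins for the measured BIC%, ISQ, and PTV values, not the study's data.

```python
# Sketch of the correlation analysis: Pearson's r between BIC% and each
# initial-stability measure, on hypothetical stand-in values.
import numpy as np
from scipy.stats import pearsonr

bic = np.array([45.2, 51.3, 58.7, 60.1, 66.4, 72.9])   # BIC%, hypothetical
isq = np.array([58.0, 61.5, 65.2, 66.0, 70.3, 74.1])   # ISQ, hypothetical
ptv = np.array([-3.1, -4.0, -4.8, -5.2, -6.0, -6.6])   # PTV, hypothetical

for name, x in [("ISQ", isq), ("PTV", ptv)]:
    r, p = pearsonr(bic, x)
    print(f"BIC% vs {name}: r = {r:.3f}, p = {p:.3f}")
```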
Procedia PDF Downloads 580
4041 Tuning the Surface Roughness of Patterned Nanocellulose Films: An Alternative to Plastic Based Substrates for Circuit Printing in High-Performance Electronics
Authors: Kunal Bhardwaj, Christine Browne
Abstract:
With the increase in global awareness of the environmental impacts of plastic-based products, there has been a massive drive to reduce our use of these products. The use of plastic-based substrates in electronic circuits has recently been a matter of concern. Plastics provide a very smooth and cheap surface for printing high-performance electronics due to their non-permeability to ink and easy mouldability. In this research, we explore the use of nanocellulose (NC) films in electronics, as they provide the advantage of being 100% recyclable and eco-friendly. The main hindrance to the mass adoption of NC film as a replacement for plastic is its higher surface roughness, which leads to ink penetration and dispersion in the channels on the film. This research was conducted to tune the RMS roughness of NC films to a range where they can replace plastics in electronics (310-470 nm). We studied the dependence of the surface roughness of the NC film on the following tunable aspects: 1) the composition by weight of the NC suspension that is sprayed on a silicon wafer, and 2) the width and depth of the channels on the silicon wafer used as a base. Silicon wafers with channel depths ranging from 6 to 18 µm and channel widths ranging from 5 to 500 µm were used as bases. The spray-coating method was used for NC film production: two suspensions, namely 1.5 wt% NC and a 50-50 NC-CNC (cellulose nanocrystal) mixture in distilled water, were sprayed through a Wagner sprayer system, model 117, at an angle of 90 degrees. The silicon wafer was kept on a conveyor moving at a velocity of 1.3 ± 0.1 cm/s. Once the suspension was uniformly sprayed, the mould was left to dry in an oven at 50°C overnight. Images of the films were taken with an optical profilometer, the Olympus OLS 5000, converted into the '.lext' format, and analyzed using Gwyddion, a data and image analysis software. The lowest measured RMS roughness, 291 nm, was obtained with the 50-50 CNC-NC mixture sprayed on a silicon wafer with a channel width of 5 µm and a channel depth of 12 µm. Surface roughness values of 320 ± 17 nm were achieved at the lower (5 to 10 µm) channel widths on a silicon wafer. This research opened the possibility of using 100% recyclable NC films with an additive (50% CNC) in high-performance electronics. The possibility of using additives like carboxymethyl cellulose (CMC) is also being explored, based on the hypothesis that CMC would reduce friction amongst fibers, which in turn would lead to better conformations amongst the NC fibers. CMC addition would thus help tune the surface roughness of the NC film to an even greater extent in the future.
Keywords: nano cellulose films, electronic circuits, nanocrystals, surface roughness
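The RMS (Sq) roughness reported here is, in essence, the root-mean-square deviation of the levelled height map, as computed by tools such as Gwyddion; a sketch on a synthetic profilometer map follows.

```python
# Sketch of the RMS (Sq) roughness computation on a profilometer height map
# after plane levelling; the height map here is synthetic.
import numpy as np

def rms_roughness(height):
    """Sq: root-mean-square deviation from the levelled mean plane."""
    rows, cols = np.mgrid[:height.shape[0], :height.shape[1]]
    A = np.column_stack([rows.ravel(), cols.ravel(), np.ones(height.size)])
    coeff, *_ = np.linalg.lstsq(A, height.ravel(), rcond=None)  # fit a plane
    levelled = height.ravel() - A @ coeff
    return np.sqrt(np.mean(levelled ** 2))

height_nm = 350 * np.random.randn(512, 512) + 0.5 * np.arange(512)  # tilted
print(f"Sq = {rms_roughness(height_nm):.0f} nm")  # tilt removed, ~350 nm
```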
Procedia PDF Downloads 124
4040 A Note on the Fractal Dimension of Mandelbrot Set and Julia Sets in Misiurewicz Points
Authors: O. Boussoufi, K. Lamrini Uahabi, M. Atounti
Abstract:
The main purpose of this paper is to calculate the fractal dimension of some Julia sets and the Mandelbrot set at Misiurewicz points. Using Matlab to generate Julia set images corresponding to the Misiurewicz points, and using fractal analysis software, we were able to find different measures that characterize those fractals in texture and other features. We focus in particular on the fractal dimension and the error calculated by the software. The box counting method is applied to the entire image to obtain the regression equation, i.e., the log-log slope, with the chosen settings available in the FracLac program. Finally, a comparison is made between the images, each corresponding to the area (boundary) where a Misiurewicz point is located.
Keywords: box counting, FracLac, fractal dimension, Julia Sets, Mandelbrot Set, Misiurewicz Points
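The box-counting estimate can be sketched as below: generate a Julia set for a parameter c (a commonly cited Misiurewicz-point value is assumed for illustration), extract its boundary, and take the log-log slope of the box counts, much as FracLac does internally.

```python
# Sketch of box counting on a Julia set boundary; the parameter c is assumed
# to be near a Misiurewicz point for illustration.
import numpy as np

def julia(c, n=1024, it=100, r=2.0):
    x = np.linspace(-1.6, 1.6, n)
    z = x[None, :] + 1j * x[:, None]
    alive = np.ones(z.shape, bool)
    for _ in range(it):
        z[alive] = z[alive] ** 2 + c
        alive &= np.abs(z) < r
    return alive                     # filled set; its edge approximates J(c)

def box_count_dimension(mask):
    sizes, counts = [2, 4, 8, 16, 32, 64], []
    for s in sizes:
        view = mask[: mask.shape[0] // s * s, : mask.shape[1] // s * s]
        boxes = view.reshape(view.shape[0] // s, s, view.shape[1] // s, s)
        counts.append(np.sum(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope                     # log-log regression slope

mask = julia(-0.1011 + 0.9563j)      # c near a Misiurewicz point (assumed)
edge = (mask ^ np.roll(mask, 1, axis=0)) | (mask ^ np.roll(mask, 1, axis=1))
print(f"box-counting dimension ~ {box_count_dimension(edge):.2f}")
```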
Procedia PDF Downloads 216
4039 Research of the Load Bearing Capacity of Inserts Embedded in CFRP under Different Loading Conditions
Authors: F. Pottmeyer, M. Weispfenning, K. A. Weidenmann
Abstract:
Continuous carbon fiber reinforced plastics (CFRP) exhibit high application potential for lightweight structures due to their outstanding specific mechanical properties. Embedded metal elements, so-called inserts, can be used to join structural CFRP parts. Drilling of the components to be joined can be avoided using inserts; in consequence, no bearing stress is anticipated. This is a distinctive benefit of embedded inserts, since continuous CFRP have low shear and bearing strength. This paper aims at investigating the load bearing capacity after pre-induced damage from impact tests and thermal cycling. In addition, the mechanical properties were characterized during dynamic high-speed pull-out testing under different loading velocities. It has been shown that the load bearing capacity increases by up to 100% for very high velocities (15 m/s) in comparison with quasi-static loading conditions (1.5 mm/min). Residual strength measurements identified the influence of thermal loading and pre-induced mechanical damage; for both, the residual strength was evaluated afterwards by quasi-static pull-out tests. Taking into account DIN EN 6038, a strong decrease in force occurs at an impact energy of 16 J, with significant damage to the laminate. Lower impact energies of 6 J, 9 J, and 12 J do not decrease the measured residual strength, although the laminate is visibly damaged, as evidenced by cracks on the rear side. To evaluate the influence of thermal loading, the specimens were placed in a climate chamber and exposed to various numbers of temperature cycles; one cycle took 1.5 hours, from -40 °C to +80 °C. It could be shown that as few as 10 temperature cycles decrease the load bearing capacity by up to 20%. Further reduction of the residual strength with an increasing number of thermal cycles was not observed, implying that the maximum damage of the composite is already induced after 10 temperature cycles.
Keywords: composite, joining, inserts, dynamic loading, thermal loading, residual strength, impact
Procedia PDF Downloads 279
4038 Supplementing Aerial-Roving Surveys with Autonomous Optical Cameras: A High Temporal Resolution Approach to Monitoring and Estimating Effort within a Recreational Salmon Fishery in British Columbia, Canada
Authors: Ben Morrow, Patrick O'Hara, Natalie Ban, Tunai Marques, Molly Fraser, Christopher Bone
Abstract:
Relative to commercial fisheries, recreational fisheries are often poorly understood and pose various challenges for monitoring frameworks. In British Columbia (BC), Canada, Pacific salmon are heavily targeted by recreational fishers while also being a key source of nutrient flow and crucial prey for a variety of marine and terrestrial fauna, including endangered Southern Resident killer whales (Orcinus orca). Although commercial fisheries were historically responsible for the majority of salmon retention, recreational fishing now accounts for both greater effort and greater retention. The current monitoring scheme for recreational salmon fisheries involves aerial-roving creel surveys. However, this method has been identified as costly and as having low predictive power, as it is often limited to sampling fragments of a fluid and temporally dynamic fishery. This study used imagery from two shore-based autonomous cameras in a highly active recreational fishery around Sooke, BC, and evaluated their efficacy in supplementing existing aerial-roving surveys for monitoring a recreational salmon fishery. The study involved continuous monitoring at high temporal resolution (over one million images analyzed in a single fishing season), using a deep learning-based vessel detection algorithm and a custom image annotation tool to efficiently thin datasets. This allowed for the quantification of peak-season effort from a busy harbour, species-specific retention estimates, high levels of detected fishing events at a nearby popular fishing location, as well as the proportion of the fishery management area represented by the cameras. The study then demonstrated how cameras can substantially enhance the temporal resolution of a fishery through diel activity pattern analyses, scaled monthly to visualize clusters of activity. This work also highlighted considerable off-season fishing detection, currently unaccounted for in the existing monitoring framework. These results demonstrate several distinct applications of autonomous cameras for providing enhanced detail currently unavailable in the monitoring framework, each of which has important considerations for the managerial allocation of resources. Further, the approach and methodology can benefit other studies that apply shore-based camera monitoring, supplement aerial-roving creel surveys to improve fine-scale temporal understanding, inform the optimal timing of creel surveys, and improve the predictive power of recreational stock assessments to preserve important and endangered fish species.
Keywords: cameras, monitoring, recreational fishing, stock assessment
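Turning per-image detections into a diel activity profile is essentially a groupby over detection timestamps, as sketched below; the column names and toy records are assumptions.

```python
# Sketch of turning per-image vessel detections into a diel activity profile:
# count detections by hour of day and month. Column names are assumptions.
import pandas as pd

detections = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2021-07-01 05:12", "2021-07-01 06:03", "2021-07-01 06:41",
        "2021-07-02 17:55", "2021-08-03 06:20", "2021-08-03 07:02",
    ]),
    "n_vessels": [1, 3, 2, 1, 4, 2],
})
detections["hour"] = detections["timestamp"].dt.hour
detections["month"] = detections["timestamp"].dt.month

diel = (detections.groupby(["month", "hour"])["n_vessels"]
        .sum().unstack(fill_value=0))
print(diel)      # rows: month, columns: hour of day -> clusters of effort
```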
Procedia PDF Downloads 122
4037 Muslim Women and Gender Justice Facts and Reality: An Indian Scenario
Authors: Asmita A. Vaidya, Shahista S. Inamdar
Abstract:
Society is dynamic, and in these processes of change and development, Indian Muslim women are no exception to social change. Islam elevated a woman's status from that of chattel or commodity to that of an individual human being with a separate legal personality, equal to that of men; but in India, not even two women are equal in availing themselves of their matrimonial rights and remedies, since separate personal laws apply to them, and thus gender justice is a fragile myth.
Keywords: Muslim women, gender justice, polygamy, Islamic jurisprudence, equality
Procedia PDF Downloads 512
4036 Molecular Dynamic Simulation of CO2 Absorption into Mixed Aqueous Solutions MDEA/PZ
Authors: N. Harun, E. E. Masiren, W. H. W. Ibrahim, F. Adam
Abstract:
The amine absorption process is an approach for mitigating CO2 in the flue gas produced by power plants. This process is the most common system used in the chemical and oil industries for gas purification to remove acid gases. One of the challenges of this process is the high energy requirement for solvent regeneration to release CO2. In the past few years, mixed alkanolamines have received increasing attention. In most cases, the mixtures contain N-methyldiethanolamine (MDEA) as the base amine with the addition of one or two more reactive amines such as PZ. The reason for applying such amine blends is to take advantage of the high reaction rate of CO2 with the activator, combined with the advantage of the low heat of regeneration of MDEA. Several experimental and simulation studies have been undertaken to understand this process using the blended MDEA/PZ solvent. Despite those studies, the mechanism of CO2 absorption into aqueous MDEA is not well understood, and the available knowledge in the open literature is limited. The aim of this study is to investigate the intermolecular interactions in the MDEA/PZ blend using Molecular Dynamics (MD) simulation. The MD simulation was run at 313 K and 1 atm, using the NVE ensemble for 200 ps and the NVT ensemble for 1 ns. The results were interpreted in terms of radial distribution function (RDF) analysis for two systems of interest, i.e., binary and ternary. The binary system describes the interaction between amine and water molecules, while the ternary system is used to determine the interaction between the amine and CO2 molecules. For the binary system, it was observed that the -OH group of MDEA is more attracted to water molecules than the -NH group of MDEA. The -OH group of MDEA can form hydrogen bonds with water, which assists the solubility of MDEA in water. The probability of intermolecular interaction of the -OH and -NH groups of MDEA with CO2 is higher in blended MDEA/PZ than in single MDEA. These findings show that the PZ molecule acts as an activator promoting the intermolecular interaction between MDEA and CO2. Thus, blending MDEA with PZ is expected to increase the absorption rate of CO2 and reduce the heat regeneration requirement.
Keywords: amine absorption process, blend MDEA/PZ, CO2 capture, molecular dynamic simulation, radial distribution function
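The RDF analysis described here can be sketched as a histogram of pair distances under the minimum-image convention, normalized by the ideal-gas shell density; the coordinates below are random stand-ins for an MD frame, and the atom selections are assumptions.

```python
# Sketch of a radial distribution function g(r) between two atom selections
# in a cubic periodic box; coordinates are random stand-ins for an MD frame.
import numpy as np

def rdf(pos_a, pos_b, box, r_max=10.0, n_bins=100):
    d = pos_a[:, None, :] - pos_b[None, :, :]
    d -= box * np.round(d / box)                  # minimum-image convention
    r = np.linalg.norm(d, axis=-1).ravel()
    r = r[(r > 1e-8) & (r < r_max)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0, r_max))
    shell = 4 / 3 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = len(pos_b) / box ** 3               # ideal-gas normalization
    return 0.5 * (edges[1:] + edges[:-1]), hist / (shell * density * len(pos_a))

box = 30.0                                        # angstrom
oh_mdea = np.random.rand(50, 3) * box             # stand-in -OH group positions
water_o = np.random.rand(500, 3) * box            # stand-in water O positions
r, g = rdf(oh_mdea, water_o, box)
print(r[np.argmax(g)], g.max())                   # first-peak location/height
```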
Procedia PDF Downloads 295
4035 Automatic Segmentation of Lung Pleura Based on Curvature Analysis
Authors: Sasidhar B., Bhaskar Rao N., Ramesh Babu D. R., Ravi Shankar M.
Abstract:
Segmentation of the lung pleura is a preprocessing step in Computer-Aided Diagnosis (CAD) that helps reduce false positives in the detection of lung cancer. Existing methods fail to extract lung regions when nodules lie at the pleura of the lungs. In this paper, a new method is proposed that segments lung regions including nodules at the pleura, based on curvature analysis and morphological operators. The proposed algorithm was tested on a six-patient dataset consisting of 60 images from the Lung Image Database Consortium (LIDC), and the results are found to be satisfactory, with a 98.3% average overlap measure (AΩ).
Keywords: curvature analysis, image segmentation, morphological operators, thresholding
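A simplified sketch of the pleura-preserving step is given below: threshold the air regions, keep the two largest components, then morphologically close the boundary so juxtapleural nodules are pulled back into the mask. The full curvature analysis of the boundary used in the paper is not reproduced.

```python
# Simplified sketch of pleura-preserving lung segmentation with thresholding
# and morphological closing; the toy slice stands in for a CT image in HU.
import numpy as np
from scipy import ndimage

def segment_lungs(ct_slice, air_threshold=-400, closing_radius=12):
    mask = ct_slice < air_threshold                # air is very low in HU
    mask = ndimage.binary_fill_holes(mask)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.argsort(sizes)[::-1][:2] + 1         # two largest = lungs
    lungs = np.isin(labels, keep)
    ball = np.ones((2 * closing_radius + 1,) * 2, bool)
    return ndimage.binary_closing(lungs, structure=ball)  # re-include nodules

ct = np.zeros((256, 256))          # body tissue ~0 HU
ct[40:216, 20:120] = -800          # left lung
ct[40:216, 136:236] = -800         # right lung
ct[100:120, 20:40] = 40            # juxtapleural "nodule" notch at the pleura
print(segment_lungs(ct).sum())     # notch is closed back into the lung mask
```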
Procedia PDF Downloads 596
4034 Lactate in Critically Ill Patients: An Outcome Marker with Time
Authors: Sherif Sabri, Suzy Fawzi, Sanaa Abdelshafy, Ayman Nagah
Abstract:
Introduction: Static derangements in lactate homeostasis during ICU stay have become established as a clinically useful marker of increased risk of hospital and ICU mortality. Lactate indices, i.e., kinetic alterations of anaerobic metabolism, make lactate a potential parameter for evaluating disease severity and the adequacy of intervention. This is an inexpensive and simple clinical parameter that can be obtained by minimally invasive means. Aim of work: To compare the predictive value of dynamic indices of hyperlactatemia in the first twenty-four hours of intensive care unit (ICU) admission with the more commonly used static values. Patients and Methods: This study included 40 critically ill patients above 18 years old, of both sexes, with hyperlactatemia (≥ 2 mmol/L). Patients were divided into a septic group (n=20) and a low oxygen transport group (n=20), which includes all causes of low oxygen delivery. Six lactate indices specifically relating to the first 24 hours of ICU admission were considered: three static and three dynamic indices. Results: There were no statistically significant differences between the two groups regarding age, most of the laboratory results including ABG, and the need for mechanical ventilation. Admission lactate was significantly higher in the low-oxygen transport group than in the septic group [37.5 ± 11.4 versus 30.6 ± 7.8, p = 0.034]. Maximum lactate was significantly higher in the low-oxygen transport group than in the septic group (p = 0.044). On the other hand, absolute lactate (mg) was higher in the septic group (p < 0.001). The percentage change of lactate was higher in the septic group (47.8 ± 11.3) than in the low-oxygen transport group (26.1 ± 12.6), with a highly significant p-value (p < 0.001). Lastly, time-weighted lactate was higher in the low-oxygen transport group (1.72 ± 0.81) than in the septic group (1.05 ± 0.8), with a significant p-value (p = 0.012). There were statistically significant differences in lactate indices between survivors and non-survivors, in both the septic and the low-oxygen transport groups. Conclusion: In critically ill patients, time-weighted lactate and the percent change in lactate in the first 24 hours can be independent predictive factors of ICU mortality. Also, a rising, compared to a falling, blood lactate concentration over the first 24 hours can be associated with a significant increase in the risk of mortality.
Keywords: critically ill patients, lactate indices, mortality in intensive care, anaerobic metabolism
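The dynamic indices named in the study can be computed from a series of timed lactate measurements as sketched below; the patient values are hypothetical.

```python
# Sketch of static and dynamic lactate indices over the first 24 h of ICU
# stay, computed from a hypothetical series of (hour, lactate) measurements.
import numpy as np

t = np.array([0, 6, 12, 18, 24])           # hours from admission
lac = np.array([4.8, 4.1, 3.2, 2.6, 2.1])  # mmol/L, hypothetical patient

admission, maximum = lac[0], lac.max()                 # static indices
percent_change = 100 * (lac[0] - lac[-1]) / lac[0]     # lactate clearance
time_weighted = np.trapz(lac, t) / (t[-1] - t[0])      # AUC / time interval

print(f"admission {admission}, max {maximum}, "
      f"percent change {percent_change:.1f}%, "
      f"time-weighted {time_weighted:.2f} mmol/L")
```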
Procedia PDF Downloads 242
4033 Bimodal Biometrics System Using Fusion of Iris and Fingerprint
Authors: Attallah Bilal, Hendel Fatiha
Abstract:
This paper proposes a bimodal biometric system for identity verification using iris and fingerprint, fused at the matching score level using the weighted sum of scores technique. The features are extracted from the preprocessed images of iris and fingerprint. The features of a query image are compared with those of a database image to obtain matching scores. The individual scores generated after matching are passed to the fusion module. This module consists of three major steps, i.e., normalization, generation of the similarity score, and fusion of the weighted scores. The final score is then used to declare the person genuine or an impostor. The system is tested on the CASIA database and gives an overall accuracy of 91.04%, with an FAR of 2.58% and an FRR of 8.34%.
Keywords: iris, fingerprint, sum rule, fusion
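The fusion module described above reduces to a few lines, as in this sketch; the score ranges, weights, and decision threshold are illustrative assumptions that would in practice be tuned on a development set.

```python
# Sketch of match-score-level fusion: min-max normalize each modality's
# score, combine with a weighted sum, and threshold the fused score.
def min_max(score, lo, hi):
    return (score - lo) / (hi - lo)

def fuse(iris_score, finger_score, w_iris=0.6, threshold=0.5):
    s_iris = min_max(iris_score, lo=0.0, hi=1.0)     # assumed score ranges
    s_finger = min_max(finger_score, lo=0.0, hi=100.0)
    fused = w_iris * s_iris + (1 - w_iris) * s_finger
    return ("genuine" if fused >= threshold else "impostor"), fused

print(fuse(0.82, 71.0))   # ('genuine', 0.776)
print(fuse(0.30, 22.0))   # ('impostor', 0.268)
```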
Procedia PDF Downloads 368
4032 The Effect of Political Characteristics on the Budget Balance of Local Governments: A Dynamic System Generalized Method of Moments Data Approach
Authors: Stefanie M. Vanneste, Stijn Goeminne
Abstract:
This paper studies the effect of the political characteristics of 308 Flemish municipalities on their budget balance in the period 1995-2011. All local governments experience the same economic and financial setting; however, some governments have high budget balances, while others have low budget balances. The aim of this paper is to explain the differences in municipal budget balances by a number of economic, socio-demographic, and political variables. The economic and socio-demographic variables are used as control variables, while the focus of this paper is on the political variables. We test four hypotheses resulting from the literature: (i) the partisan hypothesis, that left-wing governments have lower budget balances; (ii) the fragmentation hypothesis, that more fragmented governments have lower budget balances; (iii) the power hypothesis, that more powerful governments have higher budget balances; and (iv) the opportunistic budget cycle hypothesis, that politicians manipulate the economic situation before elections in order to maximize their reelection possibilities, and therefore have lower budget balances before elections. The contributions of our paper to the existing literature are multiple. First, we use the whole array of political variables and not just a selection of them. Second, we are dealing with a homogeneous database with the same budget and election rules, making it easier to focus on the political factors without having to control for the impact of differences in political systems. Third, our research extends the existing literature on Flemish municipalities, as this is the first dynamic research on local budget balances. We use a dynamic panel data model. Because of the two lagged dependent variables among the explanatory variables, we employ the system GMM (Generalized Method of Moments) estimator. This is the best possible estimator, as we are dealing with political panel data that is rather persistent. Our empirical results show that the effects of the ideological position and the power of the coalition are of less importance in explaining the budget balance. The political fragmentation of the government, on the other hand, has a negative and significant effect on the budget balance: the more parties in a coalition, the worse the budget balance is, ceteris paribus. Our results also provide evidence of an opportunistic budget cycle: budget balances are lower in pre-election years relative to other years, as incumbents try to increase their reelection possibilities. An additional finding is that the incremental effect of the budget balance is very important and should not be ignored, as is done in a lot of empirical research. The coefficients of the lagged dependent variables are always positive and very significant, which proves that the budget balance is subject to incrementalism: it is not possible to change the entire policy from one year to the next, so actions taken in recent past years still have an impact on the current budget balance. Only a relatively small amount of research concerning the budget balance takes this considerable incremental effect into account. Our findings survive several robustness checks.
Keywords: budget balance, fragmentation, ideology, incrementalism, municipalities, opportunistic budget cycle, panel data, political characteristics, power, system GMM
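The dynamic-panel idea behind the estimator can be sketched with the simpler Anderson-Hsiao instrumental-variable estimator: first-difference the model and instrument the lagged differenced dependent variable with a deeper lag of the level. This is a simplified stand-in for the Blundell-Bond system GMM used in the paper, run on simulated data.

```python
# Sketch of dynamic panel estimation via the Anderson-Hsiao IV estimator,
# a simplified stand-in for system GMM. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
N, T, rho = 308, 17, 0.6                 # municipalities, years 1995-2011
alpha = rng.normal(size=N)               # municipal fixed effects
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(scale=0.5, size=N)

# Differenced model: dy_t = rho * dy_{t-1} + de_t. The instrument y_{t-2}
# is correlated with dy_{t-1} but not with de_t (no serial correlation).
dy = np.diff(y, axis=1)
lhs = dy[:, 2:].ravel()                  # dy_t
x = dy[:, 1:-1].ravel()                  # dy_{t-1}, endogenous regressor
z = y[:, 1:-2].ravel()                   # y_{t-2}, instrument

rho_iv = (z @ lhs) / (z @ x)             # just-identified IV estimate
print(f"true rho = {rho}, IV estimate = {rho_iv:.3f}")
```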
Procedia PDF Downloads 299
4031 The Effects of the Waste Plastic Modification of the Asphalt Mixture on the Permanent Deformation
Authors: Soheil Heydari, Ailar Hajimohammadi, Nasser Khalili
Abstract:
The application of plastic waste for asphalt modification is a sustainable strategy to deal with the enormous amount of plastic waste generated each year and to enhance the properties of asphalt. The modification is practiced either by the dry process or by the wet process. In the dry process, plastics are added straight into the asphalt mixture, and in the wet process, they are mixed and digested into the bitumen. In this article, the effects of plastic inclusion in the asphalt mixture, through the dry process, on the permanent deformation of the asphalt are investigated. The main waste plastics usually used in asphalt modification are taken into account: linear low-density polyethylene, low-density polyethylene, high-density polyethylene, and polypropylene. Also, to simulate a plastic waste stream, different grades of each virgin plastic are mixed and used; for instance, four different grades of polypropylene are mixed and used as representative of polypropylene. A precisely designed mixing condition is used to dry-mix the plastics into the mixture such that the polymer is melted and modified by the later-introduced binder. In this mixing process, plastics are first added to the hot aggregates and mixed three times at different time intervals; then bitumen is introduced, and the whole mixture is mixed three times at fifteen-minute intervals. Marshall specimens were manufactured, and dynamic creep tests were conducted to evaluate the effects of the modification on the permanent deformation of the asphalt mixture. The dynamic creep test is a common repeated-loading test conducted at different stress levels and temperatures: loading cycles are applied to the AC specimen until failure occurs, and with the amount of deformation constantly recorded, the cumulative permanent strain is determined and reported as a function of the number of cycles. The results of this study showed that the dry inclusion of waste plastics is very effective in enhancing the resistance of the mixture against permanent deformation. However, the mixing process must be precisely engineered to melt the plastics and achieve a homogeneous mixture.
Keywords: permanent deformation, waste plastics, low-density polyethylene, high-density polyethylene, polypropylene, linear low-density polyethylene, dry process
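A typical reduction of dynamic creep data is sketched below: the cumulative permanent strain per cycle and the flow number, i.e., the cycle at which the strain rate is minimal and tertiary flow begins. The strain curve is synthetic.

```python
# Sketch of dynamic creep data reduction: cumulative permanent strain per
# cycle and the flow number (cycle of minimum strain rate). Data synthetic.
import numpy as np

cycles = np.arange(1, 5001)
strain = (0.8 * cycles ** 0.35          # primary + secondary creep
          + 1e-12 * cycles ** 3.5)      # tertiary flow term
rate = np.gradient(strain, cycles)      # permanent strain rate per cycle
flow_number = cycles[np.argmin(rate)]   # onset of tertiary deformation

print(f"flow number ~ cycle {flow_number}; "
      f"strain at failure onset {strain[flow_number - 1]:.1f}")
```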
Procedia PDF Downloads 88
4030 The Processing of Context-Dependent and Context-Independent Scalar Implicatures
Authors: Liu Jia’nan
Abstract:
The default accounts hold the view that there exists a kind of scalar implicature which can be processed without context and owns a psychological privilege over other scalar implicatures which depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the need of relevance in discourse. However, Katsos' experimental results showed that although, quantitatively, adults rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded violations of utterances with lexical scales as much more severe than those with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. Thus, there are two questionable points in this result: (1) Is it possible that the strange discrepancy is due to other factors instead of the generation of scalar implicature? (2) Are the ad hoc scales truly formed under a possible influence from mental context? Do the participants generate scalar implicatures with ad hoc scales, instead of just comparing semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by replicating Katsos' Experiment 1: test materials will be shown as pictures in PowerPoint, and each procedure will be done under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test materials will be transformed into written words in DMDX, and the target sentence will be shown word by word to participants in a soundproof room in our lab. The reading times of the target parts, i.e., the words carrying the scalar implicatures, will be recorded. We presume that in the group with lexical scales, a standardized pragmatic mental context would help generate the scalar implicature once the scalar word occurs, leading the participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be spent on the extra semantic processing. However, in the group with ad hoc scales, scalar implicatures may hardly be generated without support from a fixed mental context for the scale. Thus, whether the new input is informative or not does not matter at all, and the reading times of the target parts will be the same in informative and under-informative utterances. The human mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? We might be able to assume, based on our experiments, that one single dominant processing paradigm may not be plausible. Furthermore, in the processing of scalar implicatures, the semantic interpretation and the pragmatic interpretation may be made in a dynamic interplay in the mind. As to the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead the possible default or standardized paradigm to override the role of context. However, the objects in an ad hoc scale are not usually treated as scale members in the mental context, and thus the lexical-semantic associations of the objects may prevent the pragmatic reading from generating the scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.
Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing
Procedia PDF Downloads 323
4029 Design and Biomechanical Analysis of a Transtibial Prosthesis for Cyclists of the Colombian Paralympic Team
Authors: Jhonnatan Eduardo Zamudio Palacios, Oscar Leonardo Mosquera Dussan, Daniel Guzman Perez, Daniel Alfonso Botero Rosas, Oscar Fabian Rubiano Espinosa, Jose Antonio Garcia Torres, Ivan Dario Chavarro, Ivan Ramiro Rodriguez Camacho, Jaime Orlando Rodriguez
Abstract:
The training of cyclists with some type of disability finds an indispensable ally in technological development, which generates advances every day that contribute to quality of life and allow athletes to maximize their capabilities. The performance of a cyclist depends on physiological and biomechanical factors, such as the aerodynamic profile, bicycle measurements, crank length, pedaling systems, and type of competition, among others. This study focuses in particular on the description of the dynamic model of a transtibial prosthesis for Paralympic cyclists. To build the model, two points are chosen: the centers of rotation of the chainring and the sprocket of the track bicycle. The parametric scheme of the track bike represents a model with 6 degrees of freedom, given by the X-Y displacement of each reference point and the angles of the curve profile β, the velodrome cant α, and the crank rotation angle φ. The force exerted on the crank of the bicycle varies with the angles of the curve profile β, the velodrome cant α, and the crank rotation angle φ. The behavior is analyzed in the Matlab R2015a software. The average force that a cyclist exerts on the cranks of a bicycle is 1,607.1 N, so the Paralympic cyclist must exert a force of about 803.6 N on each crank. Once the maximum force associated with the movement has been determined, the work continues with the dynamic modeling of the transtibial prosthesis, which represents a model with 6 degrees of freedom, with X-Y displacement related to the rotation angles of the hip π, the knee γ, and the ankle λ. Subsequently, an analysis of the kinematic behavior of the prosthesis was carried out by means of SolidWorks 2017 and Matlab R2015a, which were used to model and analyze the variation of the hip π, knee γ, and ankle λ angles of the prosthesis. The reaction forces generated in the prosthesis were computed at the ankle of the prosthesis by summing the forces on the X and Y axes; the same analysis was then applied to the tibia of the prosthesis and the socket. The reaction forces in the parts of the prosthesis vary with the hip π, knee γ, and ankle λ angles of the prosthesis. It can therefore be deduced that the maximum forces experienced by the ankle of the prosthesis are 933.6 N on the X axis and 2,160.5 N on the Y axis. Finally, it is calculated that the maximum forces experienced by the tibia and the socket of the transtibial prosthesis in high-performance competitions are 3,266 N on the X axis and 1,357 N on the Y axis. In conclusion, it can be said that the performance of the cyclist depends on several physiological factors linked to the biomechanics of training, as well as on biomechanical factors such as aerodynamics, bicycle measurements, crank length, and non-circular pedaling systems.
Keywords: biomechanics, dynamic model, paralympic cyclist, transtibial prosthesis
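The force summation at the ankle can be sketched as a quasi-static 2D equilibrium over one crank revolution, as below; the tangential-force assumption and the neglect of segment weight are illustrative simplifications, with only the 803.6 N per-crank figure taken from the abstract.

```python
# Quasi-static sketch of the ankle force summation: the pedal force is
# resolved along X and Y over a crank revolution, and equilibrium gives the
# ankle reaction. Only the 803.6 N figure is taken from the abstract.
import numpy as np

F_crank = 803.6                        # N, force on one crank
phi = np.linspace(0, 2 * np.pi, 360)   # crank rotation angle

Fx = F_crank * np.cos(phi)             # pedal force assumed tangential
Fy = F_crank * np.sin(phi)

Rx, Ry = -Fx, -Fy                      # ankle reaction balances the pedal force
print(f"max |Rx| = {np.abs(Rx).max():.1f} N, "
      f"max |Ry| = {np.abs(Ry).max():.1f} N")
```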
Procedia PDF Downloads 3414028 Non-Destructive Test of a Bar for Determination of the Critical Compression Force Directed towards the Pole
Authors: Boris Blostotsky, Elia Efraim
Abstract:
The phenomenon of buckling of structural elements under compression arises in many cases of loading and must be considered in many structures and mechanisms. In the present work, the method and results of a dynamic buckling test of a bar loaded by a compression force directed towards a pole are considered. The critical force for such a system has not been determined experimentally before. The tested object is a bar with a semi-rigid connection to the base at one end and with a hinge moving along a circle at the other. The test consists of measuring the natural frequency of the bar at different values of the compression load. The lateral stiffness is calculated from the natural frequency and the reduced mass at the bar's movable end. The critical load is determined by extrapolating the lateral stiffness values to zero. For the experimental investigation, a special test bed was created that allows stability testing at positive and negative curvatures of the movable end's trajectory, as well as varying the rotational stiffness of the connection at the other end. Reducing friction at the movable end extends the range of applied compression forces. The testing method includes: - a methodology for planning the experiment that determines the required number of tests at various load values in the defined range, as well as the type of extrapolating function; - a methodology for the experimental determination of the reduced mass at the bar's movable end, including the bar's own mass; - a methodology for the experimental determination of the lateral stiffness of the uncompressed bar with a rotational semi-rigid connection at the base. For planning the experiment and for comparing the experimental results with the theoretical values of the critical load, the analytical dependencies of the lateral stiffness of the bar with the defined end conditions on the compression load were derived. In the particular case of a perfectly rigid connection of the bar to the base, the critical load value corresponds to the solution by S. P. Timoshenko. Good agreement between the calculated and experimental values was obtained.Keywords: non-destructive test, buckling, dynamic method, semi-rigid connections
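The extrapolation step described above lends itself to a short illustration. The sketch below, with hypothetical load-frequency measurements, an assumed reduced mass, and an assumed linear stiffness-load relation (the abstract notes that the extrapolating function itself is chosen during experiment planning), computes the lateral stiffness from each measured natural frequency via k = m(2πf)² and extrapolates to k = 0 to estimate the critical load.

```python
import numpy as np

# Hypothetical measurements: compression loads P (N) and the bar's
# natural frequencies f (Hz) observed at each load.
P = np.array([0.0, 100.0, 200.0, 300.0, 400.0])   # applied loads, N
f = np.array([12.0, 10.8, 9.4, 7.7, 5.5])         # measured frequencies, Hz

m_red = 2.5  # reduced mass at the bar's movable end, kg (assumed)

# Lateral stiffness from the single-degree-of-freedom relation
# k = m * (2*pi*f)^2.
k = m_red * (2.0 * np.pi * f) ** 2

# Fit k(P) with a straight line (assumed extrapolating function) and
# find the load at which the stiffness vanishes: the critical load P_cr.
slope, intercept = np.polyfit(P, k, 1)
P_cr = -intercept / slope
print(f"Estimated critical load: {P_cr:.1f} N")
```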
Procedia PDF Downloads 3554027 Top-Down Influences to Multistable Perception: Evidence from Temporal Dynamics
Authors: Daria N. Podvigina, Tatiana V. Chernigovskaya
Abstract:
We have studied the temporal characteristics of bistable perception for stimuli of two types: one involves alterations in perceived depth and the other has ambiguous content. As stimuli of the first type, we used the Necker lattice and lines of shadowed circles perceived ambiguously either as spheres or as holes. The Winson figure (the Eskimo/Indian picture) was a stimulus of the second type. We analyzed how often the reversals occurred (the reversal rate) and for how long each of the two interpretations, or percepts, was observed during one presentation (the stability durations). For all three ambiguous images the reversal rate and the stability durations had similar values, which provides further evidence for a significant role of top-down processes in multistable perception.Keywords: multistable perception, perceived depth, reversal rate, top-down processes
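As an illustration of the two measures, the following sketch computes the reversal rate and per-percept stability durations from a hypothetical timestamped record of percept reports; the data format and values are assumed, not taken from the study.

```python
import numpy as np

# Hypothetical record of one presentation: times (s) at which the observer
# reported a percept, plus which of the two interpretations was seen.
t = np.array([0.0, 3.2, 7.9, 10.4, 15.1, 21.6])   # report times, s
percept = np.array([0, 1, 0, 1, 0, 1])            # 0/1 = the two percepts
trial_length = 30.0                                # presentation duration, s

# Reversal rate: number of percept switches per minute of viewing.
n_reversals = np.count_nonzero(np.diff(percept) != 0)
reversal_rate = n_reversals / trial_length * 60.0

# Stability durations: how long each percept was held between switches.
durations = np.diff(np.append(t, trial_length))
mean_stability = {p: durations[percept == p].mean() for p in (0, 1)}

print(f"reversals/min: {reversal_rate:.1f}")
print(f"mean stability (s) per percept: {mean_stability}")
```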
Procedia PDF Downloads 5874026 Extraction of Urban Building Damage Using Spectral, Height and Corner Information
Authors: X. Wang
Abstract:
Timely and accurate information on urban building damage caused by earthquakes is an important basis for disaster assessment and emergency relief. Very high resolution (VHR) remotely sensed imagery, containing abundant fine-scale information, offers a large quantity of data for detecting and assessing urban building damage in the aftermath of earthquake disasters. However, the accuracy obtained using spectral features alone is comparatively low, since damaged buildings, intact buildings and pavements are spectrally similar. It is therefore of great significance to detect urban building damage effectively using multi-source data. Considering that the height or geometric structure of buildings generally changes dramatically in devastated areas, a novel multi-stage urban building damage detection method using bi-temporal spectral, height and corner information was proposed in this study. The pre-event height information was generated using stereo VHR images acquired from two different satellites, while the post-event height information was produced from airborne LiDAR data. The corner information was extracted from pre- and post-event panchromatic images. The proposed method can be summarized as follows. To reduce the classification errors caused by spectral similarity and by errors in extracting height information, ground surface, shadows and vegetation were first extracted using the post-event VHR image and height data and were masked out. Two different types of building damage were then extracted from the remaining areas: the height difference between pre- and post-event data was used for detecting building damage showing significant height change, while the difference in corner density between pre- and post-event images was used for extracting building damage showing drastic change in geometric structure. The initial building damage result was generated by combining these two results. Finally, a post-processing procedure was adopted to refine the initial result. The proposed method was quantitatively evaluated and compared to two existing methods in Port-au-Prince, Haiti, which was heavily hit by an earthquake in January 2010, using a pre-event GeoEye-1 image, a pre-event WorldView-2 image, a post-event QuickBird image and post-event LiDAR data. The results showed that the proposed method significantly outperformed the two comparative methods in terms of urban building damage extraction accuracy. The proposed method provides a fast and reliable way to detect urban building collapse and is also applicable to related applications.Keywords: building damage, corner, earthquake, height, very high resolution (VHR)
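The multi-stage decision logic reads naturally as a per-pixel rule. The sketch below combines a non-building mask with height-difference and corner-density evidence; the function name, thresholds and raster inputs are hypothetical stand-ins for the authors' actual procedure.

```python
import numpy as np

def detect_damage(pre_height, post_height, pre_corners, post_corners,
                  mask_nonbuilding, dh_thresh=3.0, dc_thresh=0.5):
    """Return a boolean building-damage map (schematic, assumed thresholds).

    pre_height / post_height   : per-pixel heights (m), pre- and post-event
    pre_corners / post_corners : per-pixel corner densities
    mask_nonbuilding           : True where ground, shadow or vegetation
                                 was detected and should be excluded
    """
    # Stage 1: mask out ground surface, shadows and vegetation.
    candidate = ~mask_nonbuilding

    # Stage 2a: damage showing significant height loss between epochs.
    height_damage = (pre_height - post_height) > dh_thresh

    # Stage 2b: damage showing drastic change in geometric structure,
    # measured here as relative change in corner density.
    corner_change = np.abs(post_corners - pre_corners) / (pre_corners + 1e-6)
    corner_damage = corner_change > dc_thresh

    # Combine the two evidence layers on the candidate pixels;
    # a post-processing refinement step would follow in practice.
    return candidate & (height_damage | corner_damage)
```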
Procedia PDF Downloads 213