Search results for: TNT equivalent method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19573

16573 Selection of Rayleigh Damping Coefficients for Seismic Response Analysis of Soil Layers

Authors: Huai-Feng Wang, Meng-Lin Lou, Ru-Lin Zhang

Abstract:

Direct time integration is a widely used method in seismic response analysis, and it commonly adopts Rayleigh damping. An approach is presented for selecting Rayleigh damping coefficients that produce a seismic response consistent with the modal damping response. In the presented approach, an expression relating the Rayleigh damping coefficients to the error of the peak response, obtained through the complete quadratic combination method, is set up, and the coefficients are then obtained by minimizing this error. Two finite element models of soil layers, excited by 28 seismic waves, were used to demonstrate the feasibility and validity of the approach.
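As background to the coefficient-selection problem, the classical two-frequency fit for Rayleigh damping can be sketched as follows. This is a minimal illustration of the C = alpha*M + beta*K relation, not the authors' CQC-error minimization; the anchor frequencies and damping ratio below are hypothetical:

```python
import numpy as np

def rayleigh_coefficients(omega_i, omega_j, zeta):
    """Classical two-frequency fit: match the modal damping ratio zeta at
    circular frequencies omega_i and omega_j (rad/s) for C = a*M + b*K,
    where the fitted ratio is zeta(w) = a/(2w) + b*w/2."""
    alpha = 2.0 * zeta * omega_i * omega_j / (omega_i + omega_j)
    beta = 2.0 * zeta / (omega_i + omega_j)
    return alpha, beta

# Illustrative anchors: 1 Hz and 5 Hz, 5% damping.
alpha, beta = rayleigh_coefficients(2 * np.pi * 1.0, 2 * np.pi * 5.0, 0.05)
# The target ratio is reproduced exactly at the two anchor frequencies:
for w in (2 * np.pi * 1.0, 2 * np.pi * 5.0):
    assert abs(alpha / (2 * w) + beta * w / 2 - 0.05) < 1e-12
```

Between the anchors the fitted ratio dips below the target and outside them it rises, which is why the choice of anchor frequencies (and hence of coefficients) matters for response accuracy.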

Keywords: Rayleigh damping, modal damping, damping coefficients, seismic response analysis

Procedia PDF Downloads 438
16572 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite in lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm directly compares the pixel values of two neighboring regions, which is inaccurate because such a metric is extremely sensitive to minor perturbations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined using a patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are derived from this new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the min-cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient, and, unlike the standard method, can directly provide explicit lung regions without any post-processing operations.
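The boundary term described above can be sketched as a patch-based n-link weight. This is a simplified illustration under an assumed patch radius and scale parameter, not the published implementation:

```python
import numpy as np

def patch_weight(img, p, q, radius=2, sigma=10.0):
    """Boundary penalty between neighbouring pixels p and q based on the
    mean squared difference of the patches centred on them, rather than
    the bare intensity difference (illustrative sketch; `radius` and
    `sigma` are hypothetical parameters)."""
    def patch(c):
        y, x = c
        return img[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(float)
    ssd = np.mean((patch(p) - patch(q)) ** 2)
    # Similar patches -> weight near 1 (expensive to cut);
    # dissimilar patches -> weight near 0 (cheap to cut).
    return np.exp(-ssd / (2 * sigma ** 2))
```

The resulting weights would serve as n-link capacities in the subsequent min-cut/max-flow step.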

Keywords: graph cuts, lung CT scan, lung parenchyma segmentation, patch-based similarity metric

Procedia PDF Downloads 169
16571 Monocular Visual Odometry for Three Different View Angles by Intel Realsense T265 with the Measurement of Remote

Authors: Heru Syah Putra, Aji Tri Pamungkas Nurcahyo, Chuang-Jan Chang

Abstract:

The MOIL-SDK method refers to the spatial angle that forms a view with a different perspective from the fisheye image. Visual odometry is a trusted technique for extending projects by tracking over image sequences. We present a real-time, precise, and persistent approach that contributes to dataset acquisition and generates ground truth as a reference for the per-image estimates. Keypoints are found with the FAST algorithm and evaluated during tracking with the 5-point algorithm with RANSAC, producing accurate estimates of the camera trajectory for rotational and translational movement on the X, Y, and Z axes.
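The 5-point-with-RANSAC step follows the generic RANSAC loop. The sketch below shows that loop with a toy line-fitting estimator standing in for the 5-point essential-matrix solver; all names and parameters are illustrative:

```python
import numpy as np

def ransac(data, fit, residual, sample_size, thresh, iters=200, rng=None):
    """Generic RANSAC: repeatedly fit a model to a minimal random sample
    and keep the model with the most inliers."""
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, np.zeros(len(data), bool)
    for _ in range(iters):
        sample = data[rng.choice(len(data), sample_size, replace=False)]
        model = fit(sample)
        inliers = residual(model, data) < thresh
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy stand-in estimator: fit y = a*x + b by least squares
# (in visual odometry this would be the 5-point essential-matrix solver).
fit_line = lambda pts: np.polyfit(pts[:, 0], pts[:, 1], 1)
res_line = lambda m, pts: np.abs(np.polyval(m, pts[:, 0]) - pts[:, 1])

x = np.linspace(0, 1, 50)
pts = np.column_stack([x, 2 * x + 1])
pts[::10, 1] += 5.0                      # inject gross outliers
model, inliers = ransac(pts, fit_line, res_line, sample_size=2, thresh=0.1)
```

The same loop applies unchanged when the minimal sample size is 5 point correspondences and the residual is the epipolar error of a candidate essential matrix.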

Keywords: MOIL-SDK, intel realsense T265, Fisheye image, monocular visual odometry

Procedia PDF Downloads 134
16570 Theoretical Prediction of the Structural, Elastic, Electronic, Optical, and Thermal Properties of Cubic Perovskites CsXF3 (X = Ca, Sr, and Hg) under Pressure Effect

Authors: M. A. Ghebouli, A. Bouhemadou, H. Choutri, L. Louaila

Abstract:

Some physical properties of the cubic perovskites CsXF3 (X = Sr, Ca, and Hg) have been investigated using the pseudopotential plane-wave (PP-PW) method based on density functional theory (DFT). The calculated lattice constants within GGA (PBE) and LDA (CA-PZ) agree reasonably with the available experimental data. The elastic constants and their pressure derivatives are predicted using the static finite strain technique. We derived the bulk and shear moduli, Young’s modulus, Poisson’s ratio, and Lamé’s constants for ideal polycrystalline aggregates. The analysis of the B/G ratio indicates that CsXF3 (X = Ca, Sr, and Hg) are ductile materials. The effect of temperature on the volume, bulk modulus, heat capacities CV and CP, and Debye temperature was also predicted.
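The polycrystalline moduli mentioned above follow from the cubic elastic constants via the standard Voigt-Reuss-Hill relations, sketched here with illustrative input values (the B/G ductility claim typically rests on the Pugh criterion B/G > 1.75):

```python
def vrh_cubic(c11, c12, c44):
    """Voigt-Reuss-Hill polycrystalline averages for a cubic crystal.
    All moduli come out in the same units as the elastic constants."""
    B = (c11 + 2 * c12) / 3.0                      # bulk modulus (Voigt = Reuss for cubic)
    Gv = (c11 - c12 + 3 * c44) / 5.0               # Voigt (upper) shear bound
    Gr = 5 * (c11 - c12) * c44 / (4 * c44 + 3 * (c11 - c12))  # Reuss (lower) bound
    G = (Gv + Gr) / 2.0                            # Hill average
    E = 9 * B * G / (3 * B + G)                    # Young's modulus
    nu = (3 * B - 2 * G) / (2 * (3 * B + G))       # Poisson's ratio
    return B, G, E, nu

# Hypothetical constants in GPa, purely for illustration:
B, G, E, nu = vrh_cubic(100.0, 50.0, 50.0)
```

The input values here are placeholders, not the paper's computed constants for CsXF3.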

Keywords: perovskite, PP-PW method, elastic constants, electronic band structure

Procedia PDF Downloads 437
16569 Probabilistic Simulation of Triaxial Undrained Cyclic Behavior of Soils

Authors: Arezoo Sadrinezhad, Kallol Sett, S. I. Hariharan

Abstract:

In this paper, a probabilistic framework based on the Fokker-Planck-Kolmogorov (FPK) approach has been applied to simulate the triaxial cyclic constitutive behavior of uncertain soils. The framework builds upon the writers’ previous work and has been extended to cyclic probabilistic simulation of the triaxial undrained behavior of soils. A von Mises elastic-perfectly plastic material model is considered. It is shown that, by using the probabilistic framework, some of the most important aspects of soil behavior under cyclic loading can be captured even with a simple elastic-perfectly plastic constitutive model.
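As a sketch of the constitutive model only (not the FPK machinery), a one-dimensional elastic-perfectly plastic return mapping can be written as follows; the material parameters are hypothetical:

```python
def stress_update(eps_history, E=200e3, sigma_y=250.0):
    """Drive a 1D elastic-perfectly plastic model through a strain history
    and return the stress path. E and sigma_y (e.g. MPa) are illustrative."""
    eps_p, path = 0.0, []
    for eps in eps_history:
        trial = E * (eps - eps_p)                 # elastic trial stress
        if abs(trial) > sigma_y:                  # yielded: project back to the surface
            sign = 1.0 if trial > 0 else -1.0
            eps_p += sign * (abs(trial) - sigma_y) / E   # accumulate plastic strain
            trial = sign * sigma_y
        path.append(trial)
    return path

# Load, yield, then unload: the residual plastic strain shifts the response.
path = stress_update([0.001, 0.002, 0.0])
```

Under cyclic strain histories this simple model already produces the hysteresis loops that the paper's probabilistic framework propagates through uncertain parameters.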

Keywords: elasto-plasticity, uncertainty, soils, fokker-planck equation, fourier spectral method, finite difference method

Procedia PDF Downloads 379
16568 Extraction of Urban Building Damage Using Spectral, Height and Corner Information

Authors: X. Wang

Abstract:

Timely and accurate information on urban building damage caused by earthquakes is an important basis for disaster assessment and emergency relief. Very high resolution (VHR) remotely sensed imagery, containing abundant fine-scale information, offers a large quantity of data for detecting and assessing urban building damage in the aftermath of earthquake disasters. However, the accuracy obtained using spectral features alone is comparatively low, since building damage, intact buildings, and pavements are spectrally similar. Therefore, it is of great significance to detect urban building damage effectively using multi-source data. Considering that the height or geometric structure of buildings generally changes dramatically in devastated areas, a novel multi-stage urban building damage detection method, using bi-temporal spectral, height, and corner information, was proposed in this study. The pre-event height information was generated using stereo VHR images acquired from two different satellites, while the post-event height information was produced from airborne LiDAR data. The corner information was extracted from pre- and post-event panchromatic images. The proposed method can be summarized as follows. To reduce the classification errors caused by spectral similarity and by errors in extracting height information, ground surface, shadows, and vegetation were first extracted using the post-event VHR image and height data and were masked out. Two different types of building damage were then extracted from the remaining areas: the height difference between pre- and post-event data was used for detecting building damage showing significant height change, while the difference in the density of corners between pre- and post-event images was used for extracting building damage showing drastic change in geometric structure. The initial building damage result was generated by combining these two results.
Finally, a post-processing procedure was adopted to refine the obtained initial result. The proposed method was quantitatively evaluated and compared to two existing methods in Port-au-Prince, Haiti, which was heavily hit by an earthquake in January 2010, using a pre-event GeoEye-1 image, a pre-event WorldView-2 image, a post-event QuickBird image, and post-event LiDAR data. The results showed that the method proposed in this study significantly outperformed the two comparative methods in terms of urban building damage extraction accuracy. The proposed method provides a fast and reliable way to detect urban building collapse, and it is also applicable to related applications.
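The two-branch damage cue described above can be sketched as follows; the thresholds and array shapes are illustrative, not the authors' calibrated values:

```python
import numpy as np

def damage_masks(h_pre, h_post, corners_pre, corners_post,
                 dh_thresh=3.0, dc_thresh=0.5):
    """Combine two per-cell damage cues (hypothetical thresholds):
    (1) significant height loss between pre- and post-event surfaces;
    (2) drastic relative drop in corner density, indicating a change
        in geometric structure."""
    height_damage = (h_pre - h_post) > dh_thresh
    corner_drop = (corners_pre - corners_post) / np.maximum(corners_pre, 1e-6)
    corner_damage = corner_drop > dc_thresh
    return height_damage | corner_damage
```

In the paper's pipeline this union would form the initial damage map, with masking of ground, shadow, and vegetation beforehand and refinement afterwards.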

Keywords: building damage, corner, earthquake, height, very high resolution (VHR)

Procedia PDF Downloads 213
16567 Temperature Contour Detection of Salt Ice Using Color Thermal Image Segmentation Method

Authors: Azam Fazelpour, Saeed Reza Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

The study uses a novel image analysis based on thermal imaging to detect temperature contours created on a salt ice surface during transient phenomena. Thermal cameras detect objects by using their emissivities and IR radiance. The ice surface temperature is not uniform during transient processes: the temperature starts to increase from the boundary of the ice towards its center. Thermal cameras are able to report temperature changes on the ice surface at every individual moment. Various contours, showing different temperature areas, appear in the picture of the ice surface captured by a thermal camera. Identifying the exact boundaries of these contours is valuable for ice surface temperature analysis. Image processing techniques are used to extract each contour area precisely. In this study, several pictures are recorded while the temperature increases throughout the ice surface, and some are selected for processing at a specific time interval. An image segmentation method is applied to the images to determine the contour areas. Color thermal images are used to exploit the main information: the red, green, and blue elements of the color images are investigated to find the best contour boundaries. Image enhancement and noise removal algorithms are applied to obtain high-contrast, clear images. A novel edge detection algorithm based on differences in the color of the pixels is established to determine contour boundaries. In this method, the edges of the contours are obtained according to the properties of the red, blue, and green image elements. The color image elements are assessed according to their information content: informative elements are processed further, while uninformative elements are removed to reduce computation time. Neighboring pixels with close intensities are assigned to one contour, and differences in intensities determine boundaries. The results are then verified by experimental tests.
An experimental setup is built using ice samples and a thermal camera. To observe the created ice contours with the thermal camera, the samples, which are initially at -20 °C, are placed in contact with a warmer surface. Pictures are captured for 20 seconds, and the method is applied to five images captured at time intervals of 5 seconds. The study shows that the green image element carries no useful information; therefore, the boundary detection method is applied to the red and blue image elements. In this case study, the results indicate that the proposed algorithm identifies the boundaries more effectively than other edge detection methods such as Sobel and Canny. A comparison between the contour detection of this method and the temperature analysis, which provides the real boundaries, shows good agreement. This color image edge detection method is applicable to other similar cases, depending on their image properties.
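The channel-wise boundary idea can be sketched as a simple neighbor-difference test on the red and blue channels. This is a simplified stand-in for the published algorithm; the channel indices and threshold are hypothetical:

```python
import numpy as np

def channel_edges(img_rgb, channels=(0, 2), thresh=12):
    """Mark a pixel as a contour boundary when the intensity jump to its
    right or lower neighbour exceeds `thresh` in the red or blue channel
    (the study found green uninformative for its thermal images)."""
    edges = np.zeros(img_rgb.shape[:2], bool)
    for c in channels:
        ch = img_rgb[..., c].astype(int)
        edges[:, :-1] |= np.abs(np.diff(ch, axis=1)) > thresh  # horizontal jumps
        edges[:-1, :] |= np.abs(np.diff(ch, axis=0)) > thresh  # vertical jumps
    return edges
```

Pixels whose neighbor differences stay below the threshold end up grouped into one contour, mirroring the paper's close-intensity grouping rule.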

Keywords: color image processing, edge detection, ice contour boundary, salt ice, thermal image

Procedia PDF Downloads 314
16566 Nanowire Sensor Based on Novel Impedance Spectroscopy Approach

Authors: Valeriy M. Kondratev, Ekaterina A. Vyacheslavova, Talgat Shugabaev, Alexander S. Gudovskikh, Alexey D. Bolshakov

Abstract:

Modern sensorics imposes strict requirements on biosensor characteristics, especially technological feasibility and selectivity. There is growing interest in the analysis of biological markers of human health, which indirectly testify to pathological processes in the body. Such markers include acids and alkalis produced by the human body, in particular ammonia and hydrochloric acid, which are found in human sweat, blood, and urine, as well as in gastric juice. Biosensors based on modern nanomaterials, especially low-dimensional ones, can be used for detecting these markers. Most classical adsorption sensors based on metal and silicon oxides are considered non-selective, because they change their electrical resistance (or impedance) identically under the adsorption of different target analytes. This work demonstrates a feasible frequency-resistive method of analyzing electrical impedance spectroscopy data. The approach makes it possible to achieve selectivity in adsorption sensors of the resistive type. The potential of the method is demonstrated by analyzing the impedance spectra of silicon nanowires in the presence of NH3 and HCl vapors with concentrations of about 125 mmol/L (2 ppm) and of water vapor. We demonstrate the possibility of unambiguously distinguishing the sensory signals from NH3 and HCl adsorption. Moreover, the method is found applicable to analyzing the composition of a mixture of ammonia and hydrochloric acid vapors without cross-sensitivity to water. The presented silicon sensor can be used to diagnose diseases of the gastrointestinal tract through qualitative and quantitative detection of the ammonia and hydrochloric acid content in biological samples. The method of data analysis can be directly transferred to other nanomaterials to analyze their applicability in the field of biosensing.

Keywords: electrical impedance spectroscopy, spectroscopy data analysis, selective adsorption sensor, nanotechnology

Procedia PDF Downloads 114
16565 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (API). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine, and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol, and acetic acid) in all seven amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation, and bad peak shape were identified for the acetic acid peaks, caused by the reaction of acetic acid with the stationary phase (cyanopropyl dimethyl polysiloxane) of the column and by the dissociation of acetic acid in water (if used as the diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as the diluent to avoid these issues, whereas most published methods for acetic acid quantification by GC-HS use a derivatisation technique to protect acetic acid. As per the compendia, a risk-based approach was selected as appropriate to determine the degree and extent of the validation process and to assure the fitness of the procedure. The total error concept was therefore selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit level) and ±30% for the other levels, with a 95% confidence interval (5% risk profile). The method was developed using a DB-Waxetr column manufactured by Agilent, with an internal diameter of 530 µm, a film thickness of 2.0 µm, and a length of 30 m. Helium at a constant flow of 6.0 mL/min in constant makeup mode was used as the carrier gas.
The present method is simple, rapid, and accurate, and is suitable for the rapid analysis of isopropyl alcohol, ethanol, methanol, and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol, and 100 ppm to 400 ppm for acetic acid, which covers the specification limits provided in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for testing residual solvents in amino acid drug substances.

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 148
16564 Stochastic Simulation of Random Numbers Using Linear Congruential Method

Authors: Melvin Ballera, Aldrich Olivar, Mary Soriano

Abstract:

Digital computers nowadays must have a utility capable of generating random numbers. Usually, computer-generated random numbers are not truly random: given predefined values such as the starting point and end points, the sequence is almost predictable. Random numbers have many applications in business simulation, manufacturing, the services domain, the entertainment sector, and other areas, making it worthwhile to design a method that yields unpredictable random numbers. Applying stochastic simulation with the linear congruential algorithm shows that as the seed and range increase, the numbers produced or selected by the computer become more unique. If this is implemented in an environment where random numbers are much needed, the reliability of the random numbers is guaranteed.
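The linear congruential recurrence underlying the study is x_{n+1} = (a·x_n + c) mod m. A minimal sketch, using the well-known Numerical Recipes constants rather than necessarily those of the paper:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
    Yields values normalised to [0, 1). Constants are the widely
    cited Numerical Recipes parameters (full period for m = 2**32)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(42)
sample = [next(gen) for _ in range(5)]
```

The sequence is fully determined by the seed, which illustrates the paper's point: the output is pseudorandom, and unpredictability in practice comes from varying the seed and range.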

Keywords: stochastic simulation, random numbers, linear congruential algorithm, pseudorandomness

Procedia PDF Downloads 316
16563 Environmental Life Cycle Assessment of Two Technological Scenarios of Wind Turbine Blade Composition for an Optimized Wind Turbine Design Using the Impact 2002+ Method and 15 Environmental Impact Indicators

Authors: A. Jarrou, A. Iranzo, C. Nana

Abstract:

The rapid development of the onshore/offshore wind industry and the continuous, strong, and long-term support from governments have made it possible to create factories specializing in the manufacture of the different parts of wind turbines. In the literature, however, Life Cycle Assessment (LCA) analyses consider the wind turbine as a whole and do not allow the allocation of impacts to the different components of the wind turbine. Here we propose to treat each part of the wind turbine as a system in its own right, which is more in line with the current production system. This article aims to assess the environmental impacts associated with 1 kg of wind turbine blades. In order to carry out a realistic and precise study, the different stages of the life cycle of a wind turbine installation are included (manufacture, installation, use, maintenance, dismantling, and waste treatment). The Impact 2002+ method used makes it possible to assess 15 impact indicators (human toxicity, terrestrial and aquatic ecotoxicity, climate change, land use, etc.). Finally, a sensitivity study is carried out to analyze the different types of uncertainty in the collected data.

Keywords: life cycle assessment, wind turbine, turbine blade, environmental impact

Procedia PDF Downloads 178
16562 A Novel PSO Based Decision Tree Classification

Authors: Ali Farzan

Abstract:

Classification of data objects or patterns is a major part of most decision-making systems. One of the most popular and commonly used classification methods is the Decision Tree (DT). It is a hierarchical decision-making system in which a binary tree is constructed and, starting from the root, some of the classes are rejected at each node until a leaf node is reached. Each leaf node represents one specific class. Finding the splitting criterion at each node for constructing or training the tree is a major problem. Particle Swarm Optimization (PSO) has been adopted as a metaheuristic search method for finding the best splitting criteria. Results of evaluating the proposed method over benchmark datasets indicate the higher accuracy of the new PSO-based decision tree.
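The idea of using PSO to search for a node's splitting criterion can be sketched for a single numeric feature: a bare-bones one-dimensional PSO minimizing the weighted Gini impurity of the two child nodes. All hyperparameters here are illustrative, not the paper's:

```python
import numpy as np

def pso_split(x, y, n_particles=20, iters=40, rng=None):
    """Search for the threshold on feature x that minimises the weighted
    Gini impurity of the split (illustrative 1-D PSO sketch)."""
    rng = rng or np.random.default_rng(0)

    def gini(labels):
        if len(labels) == 0:
            return 0.0
        p = np.bincount(labels) / len(labels)
        return 1.0 - np.sum(p ** 2)

    def cost(t):                      # weighted child impurity for threshold t
        left, right = y[x <= t], y[x > t]
        n = len(y)
        return len(left) / n * gini(left) + len(right) / n * gini(right)

    pos = rng.uniform(x.min(), x.max(), n_particles)   # candidate thresholds
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pcost = np.array([cost(t) for t in pos])
    gbest = pbest[np.argmin(pcost)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        c = np.array([cost(t) for t in pos])
        better = c < pcost
        pbest[better], pcost[better] = pos[better], c[better]
        gbest = pbest[np.argmin(pcost)]
    return gbest
```

In a full tree builder, this search would run once per node (and per candidate feature) in place of the usual exhaustive threshold scan.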

Keywords: decision tree, particle swarm optimization, splitting criteria, metaheuristic

Procedia PDF Downloads 406
16561 Exact Solutions of a Nonlinear Schrodinger Equation with Kerr Law Nonlinearity

Authors: Muna Alghabshi, Edmana Krishnan

Abstract:

A nonlinear Schrodinger equation has been solved by mapping methods in terms of Jacobi elliptic functions (JEFs). The equation under consideration has a linear evolution term, linear and nonlinear dispersion terms, a Kerr law nonlinearity term, and three terms representing the contribution of metamaterials. This equation, which has applications in optical fibers, is found to have soliton solutions, shock wave solutions, and singular wave solutions when the modulus of the JEFs approaches 1, which is the infinite period limit. The equation with special values of the parameters has also been solved using the tanh method.
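As a reminder of how the tanh method proceeds, here is a generic sketch for a Kerr-law travelling-wave reduction; the paper's actual equation carries additional metamaterial terms, so the ODE and coefficients below are schematic:

```latex
% Seek q(x,t) = U(\xi)\, e^{i(-\kappa x + \omega t)} with \xi = x - vt,
% reducing the PDE to an ODE for the real amplitude U of the schematic form
\[
  U'' + \lambda U - \mu U^{3} = 0, \qquad \lambda,\ \mu > 0 .
\]
% The tanh method posits a finite power series in Y = \tanh\xi,
\[
  U(\xi) = \sum_{j=0}^{N} a_{j} Y^{j},
\]
% balancing U'' against U^{3} fixes N = 1, and matching powers of Y yields
% the shock-wave (kink) solution, the modulus-to-1 limit of the JEF solutions:
\[
  U(\xi) = \sqrt{\lambda/\mu}\;\tanh\!\bigl(\sqrt{\lambda/2}\,\xi\bigr).
\]
```

Substituting the kink back in confirms it term by term, since U'' = λU(tanh² − 1)·(−1)·(−2)/2 reproduces exactly the λU − μU³ combination for the stated amplitude and width.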

Keywords: Jacobi elliptic function, mapping methods, nonlinear Schrodinger Equation, tanh method

Procedia PDF Downloads 315
16560 Aspectual Verbs in Modern Standard Arabic

Authors: Yasir Alotaibi

Abstract:

The aim of this paper is to discuss the syntactic analysis of aspectual or phasal verbs in Modern Standard Arabic (MSA). Aspectual or phasal verbs are a class of verbs that require a verbal complement and denote the inception, duration, termination, etc. of a state or event. This paper discusses two groups of aspectual verbs in MSA. The first group includes verbs such as ̆gacala, tafiqa, ?akhatha, ?ansha?a, sharaca, and bada?a, which denote the inception of an event. The second group includes verbs such as ?awshaka, kaada, and karaba, whose meaning is equivalent to 'be near/almost'. The following examples illustrate the use of the verb bada?a 'begin', from the first group:
(1) a. saalim-un bada?a yuthaakiru.
Salem-NOM begin.PFV.3SGM study.IPFV.3SGM
'Salem began to study'
b. *saalim-un bada?a ?an yuthaakiru.
Salem-NOM begin.PFV.3SGM COMP study.IPFV.3SGM
'Salem began to study'
Example (1a) is grammatical because the aspectual verb takes a verbal complement that is not introduced by a complementizer. In contrast, example (1b) is ungrammatical because the verbal complement is introduced by the complementizer ?an 'that'. The following examples illustrate the use of the verb kaada 'be almost', from the second group. Both examples are grammatical, which means that the verbal complement of this verb may occur without a complementizer (as in (2a)) or with one (as in (2b)):
(2) a. saalim-un kaada yuthaakiru.
Salem-NOM be.almost.PFV.3SGM study.IPFV.3SGM
'Salem was almost to study'
b. saalim-un kaada ?an yuthaakiru.
Salem-NOM be.almost.PFV.3SGM COMP study.IPFV.3SGM
'Salem was almost to study'
The salient properties of this class of verbs are that they require a verbal complement; that no complementizer may introduce the complement with the first group, while it is possible with the second; and that the aspectual verb and the embedded verb share, and agree with, the same subject. To the best of our knowledge, aspectual verbs in MSA have been discussed in traditional grammar only and have not been studied within modern syntactic theories. This paper considers the analysis of aspectual verbs in MSA within the Lexical Functional Grammar (LFG) framework. It uses evidence such as modifiers and negation to determine whether these verbs have PRED values and head their own f-structures, or whether they form complex predicates with their complements. If aspectual verbs show the properties of heads, the paper will explore what kind of heads they are; in particular, whether they are raising or control verbs. The paper will use tests such as agreement and selectional restrictions to establish what kind of verbs they are.

Keywords: aspectual verbs, biclausal, monoclausal, raising

Procedia PDF Downloads 55
16559 Fast Algorithm to Determine Initial Tsunami Wave Shape at Source

Authors: Alexander P. Vazhenin, Mikhail M. Lavrentiev, Alexey A. Romanenko, Pavel V. Tatarintsev

Abstract:

One of the problems obstructing effective tsunami modelling is the lack of information about the initial wave shape at the source. The existing methods (geological surveys, sea radars, satellite images) carry a significant amount of uncertainty. Therefore, direct measurements of tsunami waves obtained by deep-water bottom pressure recorders are also used. In this paper we propose a new method to reconstruct the initial sea surface displacement at the tsunami source by approximating the measured signal (marigram) with a linear combination of synthetic marigrams from a selected set of unit sources, calculated in advance. This method has demonstrated good precision and very high performance. The mathematical model and the results of numerical tests are described here.
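The reconstruction step reduces to a linear least-squares problem. A minimal sketch with synthetic unit-source marigrams (the function and signal names are hypothetical):

```python
import numpy as np

def source_weights(measured, unit_marigrams):
    """Find weights w so that the linear combination of precomputed
    unit-source marigrams (rows of G) best reproduces the measured
    marigram, in the least-squares sense."""
    G = np.asarray(unit_marigrams)          # shape (n_sources, n_samples)
    w, *_ = np.linalg.lstsq(G.T, np.asarray(measured), rcond=None)
    return w

# Synthetic check: a 'measured' signal built from known weights.
t = np.linspace(0, 1, 200)
units = np.array([np.sin(2 * np.pi * f * t) for f in (1, 2, 3)])
measured = 0.5 * units[0] + 2.0 * units[2]
w = source_weights(measured, units)
```

The recovered weights then scale the unit initial displacements to give the reconstructed sea surface shape at the source.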

Keywords: numerical tests, orthogonal decomposition, tsunami initial sea surface displacement

Procedia PDF Downloads 469
16558 Improved Accuracy of Ratio Multiple Valuation

Authors: Julianto Agung Saputro, Jogiyanto Hartono

Abstract:

Multiple valuation is widely used by investors and practitioners, but its accuracy is questionable. Inaccuracies in multiple valuation are due to the unreliability of the information used in valuation, inaccurate comparison group selection, and the use of individual multiple values. This study investigated valuation accuracy to examine factors that can increase the accuracy of valuation by multiple ratios, namely discretionary accruals, the comparison group, and the composite of multiple valuations. The results indicate that the multiple value adjustment method with discretionary accruals provides better accuracy, and that the industry comparison group method combined with company size and growth also provides better accuracy. The composite of individual multiple valuations gives the best accuracy. If all of these factors are combined, valuation by multiple ratios gives the best results.
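The "composite" idea can be sketched as averaging the value estimates implied by each ratio's median peer multiple. This is a minimal illustration with hypothetical ratios and numbers, not the study's adjustment procedure:

```python
import numpy as np

def composite_value(target_drivers, peer_multiples):
    """Value the target with the median peer multiple of each ratio,
    then average the individual value estimates. `target_drivers`
    maps ratio names (e.g. 'P/E') to the target's value driver
    (earnings, book value, ...); `peer_multiples` maps the same
    names to observed peer multiples."""
    estimates = []
    for ratio, driver in target_drivers.items():
        multiple = np.median(peer_multiples[ratio])   # peer consensus
        estimates.append(multiple * driver)           # implied value
    return float(np.mean(estimates))
```

Averaging across ratios dampens the idiosyncratic error of any single multiple, which is the intuition behind the composite's superior accuracy.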

Keywords: multiple, valuation, composite, accuracy

Procedia PDF Downloads 282
16557 Understanding Chances and Challenges of Family Planning: Qualitative Study in Indonesia's Banyumas District

Authors: Utsamani Cintyamena, Sandra Frans Olivia, Shita Lisyadewi, Ariane Utomo

Abstract:

Family planning is one of the fundamental aspects of preventing maternal morbidity and mortality. However, the contraceptive prevalence rate among Indonesia’s married women is low. The purpose of this study is to assess opportunities and challenges in family planning. Methodology: We conducted a qualitative study in Banyumas District, which achieved a large reduction in maternal mortality from 2013 to 2015. Four focus group discussions and four small group discussions were conducted to assess the knowledge and attitudes of women in using contraceptives and their method of choice, with in-depth interviews of four health workers and two family planning field officers for triangulation. Thematic content analysis was done manually. Results: Key themes emerged across interviews, including (1) the first choice of contraceptive was the one they had used previously, provided that they had not encountered problems with it; (2) rumors and fear of side effects affected their method of choice; and (3) the selection of contraceptive method was influenced by the approval of the husband, beliefs, and role models in the community. Conclusion: Collaboration between health workers, family planning field officers, and the community, as well as support from stakeholders, must be increased to promote family planning.

Keywords: attitude, challenge, chance, family planning, knowledge

Procedia PDF Downloads 147
16556 The Breakthrough of Sexual Cinematic Freedom in Denmark in the 1960s and 1970s

Authors: Søren Birkvad

Abstract:

This paper traces the development of sexual cinematic freedom in the wake of an epoch-making event in Danish cultural history. As the first country in the world, Denmark abolished all censorship for adults in 1969, making the tiny nation the world’s largest exporter of pornography for several years. Drawing on the insights of social and cultural history and the focus of the National Cinema direction of Cinema Studies, this study examines Danish film pornography of the 1960s and 1970s in its own right (e.g., its peculiar mix of sex, popular comedy, and certain ‘feminist’ agendas). More importantly, however, it covers a broader pattern, namely the culturally deep-rooted tradition of freedom of speech and sexual liberalism in Denmark. Thus, the key concept of frisind (“free mind”) in Danish cultural history took on an increasingly partisan application in the 1960s and 1970s. It became a designation for all-is-permitted hippie excess but was also embraced by dissenting movements on the left, such as feminism, which questioned whether a free mind necessarily meant free love. In all of this, Danish cinema of the 1960s and 1970s offers a remarkable source of historical insight, simultaneously reminding us of a number of acute issues in contemporary society. These issues include gendered ideas of sexuality and freedom, then and now, and the corresponding clash of cultures between a liberal commercial industry and the accelerating political demands of the “sexual revolution.” Finally, they include certain tensions between, on the one hand, a purely materialistic idea of sexual freedom, incarnated by anything from pornography to many of the taboo-breaking youth and avant-garde films in the wake of the 1968 rebellion, and, on the other hand, growing opposition to this anti-spiritual perception of human sexuality (represented, for instance, by the ‘closet conservatism’ of today’s Danish art-film star Lars von Trier).
All in all, this presentation offers a reflection on ideas of sexuality and gender rooted in a unique historical moment in cinematic history.

Keywords: Danish film history, cultural history, film pornography, history of sexuality, national cinema, sexual liberalism

Procedia PDF Downloads 216
16555 Contrasting the Water Consumption Estimation Methods

Authors: Etienne Alain Feukeu, L. W. Snyman

Abstract:

Water scarcity is becoming a real issue nowadays. Most countries in the world are facing it in their own way, based on their own geographical location and conditions. Many countries face the challenge of a growing water demand as a result not only of an increased population and economic growth but also of the pressure of population dynamics and urbanization. To mitigate some of these related problems, an accurate method of water estimation and future prediction (forecasting) is essential to guarantee not only a sufficient quantity but also a good water distribution and management system. Although several works have been undertaken to address this concern, there is still considerable disparity between the different methods and standards used for water prediction and estimation. Hence, this work contrasts and compares two well-defined and established methods from two countries (the USA and South Africa) to demonstrate the inconsistency that arises when different methods and standards are used interchangeably.

Keywords: water scarcity, water estimation, water prediction, water forecast

Procedia PDF Downloads 201
16554 A Comparison between Bèi Passives and Yóu Passives in Mandarin Chinese

Authors: Rui-heng Ray Huang

Abstract:

This study compares the syntax and semantics of two kinds of passives in Mandarin Chinese: bèi passives and yóu passives. To express a Chinese equivalent of ‘The thief was taken away by the police,’ either bèi or yóu can be used, as in Xiǎotōu bèi/yóu jǐngchá dàizǒu le. It is shown in this study that bèi passives and yóu passives differ semantically and syntactically. The semantic observations are based on theta theory, which deals with thematic roles, while the syntactic analysis draws heavily upon generative grammar, looking into thematic structures. The findings of this study are as follows. First, the core semantics of bèi passives is centered on the Patient NP in the subject position. This Patient NP is essentially an Affectee, undergoing the outcome or consequence brought about by the action represented by the predicate. This may explain why in the sentence Wǒde huà bèi/*yóu tā niǔqū le ‘My words have been twisted by him/her,’ only bèi is allowed: the subject NP wǒde huà ‘my words’ suffers a negative consequence. Yóu passives, in contrast, place the semantic focus on the post-yóu NP, which is not an Affectee. Instead, it plays a role that bears a certain responsibility without being affected in the way an Affectee is. For example, in the sentence Zhèbù diànyǐng yóu/*bèi tā dānrèn dǎoyǎn ‘This film is directed by him/her,’ only the use of yóu is possible because the post-yóu NP tā ‘s/he’ refers to someone in charge, who is not an Affectee, nor is the sentence-initial NP zhèbù diànyǐng ‘this film’. As for the second finding, the syntactic structures of bèi passives and yóu passives differ in that the former involve a two-place predicate while the latter involve a three-place predicate.
The passive morpheme bèi in a case like Xiǎotōu bèi jǐngchá dàizǒu le ‘The thief was taken away by the police’ has been argued by some Chinese syntacticians to be a two-place predicate which selects an Experiencer subject and an Event complement. Under this analysis, the initial NP xiǎotōu ‘the thief’ in the above example is a base-generated subject. This study, however, proposes that yóu passives fall into a three-place unergative structure. In the sentence Xiǎotōu yóu jǐngchá dàizǒu le ‘The thief was taken away by the police,’ the initial NP xiǎotōu ‘the thief’ is a topic which serves as a Patient taken by the verb dàizǒu ‘take away.’ The subject of the sentence is assumed to be an Agent, which is in a null form and may find its reference from the discourse or world knowledge. Regarding the post-yóu NP jǐngchá ‘the police,’ its status is dual. On the one hand, it is a Patient introduced by the light verb yóu; on the other, it is an Agent assigned by the verb dàizǒu ‘take away.’ It is concluded that the findings in this study contribute to better understanding of what makes the distinction between the two kinds of Chinese passives.

Keywords: affectee, passive, patient, unergative

Procedia PDF Downloads 273
16553 Ultra-Fast Growth of ZnO Nanorods from Aqueous Solution: Technology and Applications

Authors: Bartlomiej S. Witkowski, Lukasz Wachnicki, Sylwia Gieraltowska, Rafal Pietruszka, Marek Godlewski

Abstract:

Zinc oxide is an extensively studied II-VI semiconductor with a direct energy gap of about 3.37 eV at room temperature and high transparency in the visible spectral region. Due to these properties, ZnO is an attractive material for applications in photovoltaic, electronic, and optoelectronic devices. ZnO nanorods, owing to their well-developed surface, have potential applications in sensor technology and photovoltaics. In this work, we present a new, inexpensive method for the ultra-fast growth of ZnO nanorods from aqueous solution. This environmentally friendly and fully reproducible method allows nanorods to be grown within a few minutes on various substrates, without any catalyst or complexing agent. The growth temperature does not exceed 50°C, and growth can be performed at atmospheric pressure. The method is simple, allows the size of the ZnO nanorods to be regulated over a wide range, and is also very safe, requiring only organic, non-toxic, low-price precursors. Growth can be performed on almost any type of substrate through homo-nucleation as well as hetero-nucleation. Moreover, the resulting nanorods are of very high quality: they are monocrystalline, as confirmed by XRD and transmission electron microscopy, and, importantly, oxygen vacancies are not found in the photoluminescence measurements. First results for our ZnO nanorods in sensor applications are very promising. A resistive UV sensor based on ZnO nanorods grown on a quartz substrate shows a high sensitivity of 20 mW/m² (2 μW/cm²) for point contacts, especially given that the results were obtained for a nanorod array, not for a single nanorod. UV light (below 400 nm in wavelength) generates electron-hole pairs, which removes water vapor and hydroxyl groups from the surfaces. This reduces the depletion layer in the nanorods and thus lowers the resistance of the structure. The so-obtained sensor works at room temperature and does not need annealing to reset to its initial state. The obtained ZnO nanorods are also applied in simple-architecture photovoltaic cells (efficiency over 12%) in conjunction with low-price Si substrates, and in highly sensitive photoresistors. Details of the technology and applications, including the first sensor results, will be presented.

Keywords: hydrothermal method, photoresistor, photovoltaic cells, ZnO nanorods

Procedia PDF Downloads 432
16552 Metropolis-Hastings Sampling Approach for High Dimensional Testing Methods of Autonomous Vehicles

Authors: Nacer Eddine Chelbi, Ayet Bagane, Annie Saleh, Claude Sauvageau, Denis Gingras

Abstract:

As recently stated by the National Highway Traffic Safety Administration (NHTSA), to demonstrate the expected performance of a highly automated vehicle system, test approaches should include a combination of simulation, test track, and on-road testing. In this paper, we propose a new validation method for autonomous vehicles involving on-road tests (Field Operational Tests), test track (Test Matrix), and simulation (Worst Case Scenarios). We concentrate our discussion on the simulation aspects; in particular, we extend recent work based on Importance Sampling by using a Metropolis-Hastings (MH) algorithm to sample data collected from the Safety Pilot Model Deployment (SPMD) in lane-change scenarios. Our proposed MH sampling method will be compared to the Importance Sampling method, which does not perform well in high-dimensional problems. The importance of this study lies in obtaining a sampler that can be applied to high-dimensional simulation problems in order to reduce and optimize the number of test scenarios necessary for the validation and certification of autonomous vehicles.
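The core of the sampling step described above can be sketched with a minimal one-dimensional random-walk Metropolis-Hastings sampler. The target density, step size, and the "lane-change criticality score" interpretation below are illustrative assumptions for the sketch, not the authors' actual SPMD model:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    """Draw correlated samples from an unnormalized density via
    random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = random.Random(seed)
    x, log_p = x0, log_target(x0)
    samples = []
    for _ in range(n_samples):
        x_new = x + rng.gauss(0.0, step)            # symmetric proposal
        log_p_new = log_target(x_new)
        # accept with probability min(1, p_new / p_old)
        if math.log(rng.random() + 1e-300) < log_p_new - log_p:
            x, log_p = x_new, log_p_new
        samples.append(x)
    return samples

# toy target: a standard-normal "criticality" score for a lane-change gap
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
```

Because only a log-density ratio is needed, the same loop applies when the target is an unnormalized empirical density estimated from collected driving data.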

Keywords: automated driving, autonomous emergency braking (AEB), autonomous vehicles, certification, evaluation, importance sampling, metropolis-hastings sampling, tests

Procedia PDF Downloads 289
16551 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing light, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding the exact contours with the greedy snake algorithm. We use both region and contour information to create a target candidate model, which is dynamically updated during tracking. To avoid the accumulation of errors during updating, the target region is given to a perceptron neural network to separate the target from the background. Its output is then used for exact calculation of the size and center of the target, and also serves as the initial contour for the greedy snake algorithm to find the exact target edge. The proposed algorithm has been tested on a database containing many challenges, such as the high speed and agility of aircraft, background clutter, occlusions, and camera movement. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
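The particle filter stage of such a tracker, i.e. prediction, likelihood weighting, and resampling, can be sketched for a one-dimensional position. This is a hedged toy version: the random-walk motion model, Gaussian measurement likelihood, and noise levels are illustrative assumptions, and the neural-network and snake stages are omitted:

```python
import math
import random

def particle_filter_step(particles, weights, z, rng,
                         process_noise=1.0, meas_noise=2.0):
    """One predict/update/resample cycle of a bootstrap particle filter."""
    # predict: propagate each particle with a random-walk motion model
    particles = [p + rng.gauss(0.0, process_noise) for p in particles]
    # update: reweight by the Gaussian likelihood of measurement z
    weights = [w * math.exp(-0.5 * ((z - p) / meas_noise) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1e-300
    weights = [w / total for w in weights]
    # resample to fight weight degeneracy, then reset to uniform weights
    particles = rng.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

# track a target drifting at +1 unit per frame from noisy measurements
rng = random.Random(1)
n = 500
particles = [rng.gauss(0.0, 5.0) for _ in range(n)]
weights = [1.0 / n] * n
true_pos = 0.0
for _ in range(30):
    true_pos += 1.0
    z = true_pos + rng.gauss(0.0, 2.0)
    particles, weights = particle_filter_step(particles, weights, z, rng)
estimate = sum(particles) / n
```

In the full method, the per-particle likelihood would come from the region/contour candidate model rather than a plain Gaussian around a scalar measurement.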

Keywords: video tracking, particle filter, greedy snake, neural network

Procedia PDF Downloads 342
16550 Development of a Multi-Locus DNA Metabarcoding Method for Endangered Animal Species Identification

Authors: Meimei Shi

Abstract:

Objectives: The identification of endangered species, especially the simultaneous detection of multiple species in complex samples, plays a critical role in alleged wildlife crime incidents and in preventing illegal trade. The aim of this study was to develop a multi-locus DNA metabarcoding method for endangered animal species identification. Methods: Several pairs of universal primers were designed according to conserved mitochondrial gene regions. Experimental mixtures were artificially prepared from well-defined species, including endangered species such as forest musk deer, bear, tiger, pangolin, and sika deer. The artificial samples contained 1-16 well-characterized species at DNA concentrations from 1% to 100%. After multiplex PCR amplification and parameter modification, the amplified products were analyzed by capillary electrophoresis and used for NGS library preparation. DNA metabarcoding was carried out based on Illumina MiSeq amplicon sequencing. The data were processed with quality trimming, read filtering, and OTU clustering; representative sequences were searched using BLASTn. Results: According to the parameter modification and multiplex PCR amplification results, five primer sets targeting COI, Cytb, 12S, and 16S were selected as the NGS library amplification primer panel. High-throughput sequencing data analysis showed that the established multi-locus DNA metabarcoding method was sensitive and could accurately identify all species in the artificial mixtures, including the endangered animal species Moschus berezovskii, Ursus thibetanus, Panthera tigris, Manis pentadactyla, and Cervus nippon at 1% DNA concentration. In conclusion, the established species identification method provides technical support for customs and forensic scientists in preventing the illegal trade of endangered animals and their products.
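The OTU-clustering step in such a pipeline can be illustrated with a greedy identity-threshold pass. This is a deliberately simplified, hypothetical sketch in the spirit of UCLUST-style clustering; it assumes pre-aligned, equal-length reads and is not the authors' actual pipeline:

```python
def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(1 for x, y in zip(a, b) if x == y) / max(len(a), len(b))

def greedy_otu_cluster(reads, threshold=0.97):
    """Assign each read to the first centroid it matches at or above the
    identity threshold; otherwise the read seeds a new cluster."""
    centroids, clusters = [], []
    for read in reads:
        for i, c in enumerate(centroids):
            if identity(read, c) >= threshold:
                clusters[i].append(read)
                break
        else:
            centroids.append(read)
            clusters.append([read])
    return centroids, clusters
```

The representative (centroid) sequences would then be searched with BLASTn against a reference database, as described in the abstract.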

Keywords: DNA metabarcoding, endangered animal species, mitochondria nucleic acid, multi-locus

Procedia PDF Downloads 140
16549 A New Approach to Retrofit Steel Moment Resisting Frame Structures after Mainshock

Authors: Amir H. Farivarrad, Kiarash M. Dolatshahi

Abstract:

During earthquake events, aftershocks can significantly increase the probability of collapse of buildings, especially those damaged during the mainshock. In this paper, a practical approach is proposed for the seismic rehabilitation of mainshock-damaged buildings that can be easily implemented within a few days after the mainshock. To show the efficacy of the proposed method, a nine-story steel moment frame building designed to pre-Northridge codes is chosen as a case study. The collapse fragility curve under aftershock loading is presented for both the retrofitted and non-retrofitted structures. Comparison of the collapse fragility curves shows that the proposed method is indeed applicable for reducing the seismic collapse risk.
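A collapse fragility curve of the kind compared here is commonly modeled as a lognormal CDF of the ground-motion intensity measure. The following sketch uses hypothetical median and dispersion values purely to illustrate how retrofitting shifts the curve; the numbers are not from the case study:

```python
import math

def collapse_fragility(im, median, beta):
    """Lognormal collapse fragility: P(collapse | IM = im), with median
    collapse capacity `median` and logarithmic dispersion `beta`."""
    return 0.5 * (1.0 + math.erf(math.log(im / median)
                                 / (beta * math.sqrt(2.0))))

# hypothetical aftershock parameters (Sa in g): retrofit raises the median
p_damaged = collapse_fragility(0.5, median=0.6, beta=0.4)
p_retrofitted = collapse_fragility(0.5, median=0.9, beta=0.4)
```

At the median intensity the curve passes through 0.5 by construction; raising the median capacity lowers the collapse probability at every intensity level.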

Keywords: aftershock, collapse fragility curve, seismic rehabilitation, seismic retrofitting

Procedia PDF Downloads 433
16548 Improvement of Piezoresistive Pressure Sensor Accuracy by Means of Current Loop Circuit Using Optimal Digital Signal Processing

Authors: Peter A. L’vov, Roman S. Konovalov, Alexey A. L’vov

Abstract:

The paper presents an advanced digital modification of the conventional current loop circuit for piezoelectric pressure transducers. Optimal DSP algorithms, which process the current loop responses by the maximum likelihood method, are applied to diminish measurement errors. The loop circuit has additional advantages, such as the ability to operate with any type of resistance or reactance sensor, and offers a considerable increase in the accuracy and quality of measurements compared with AC bridges. The results obtained make it possible to replace high-accuracy, expensive measuring bridges with current loop circuits.
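Under the common assumption of i.i.d. Gaussian noise, a maximum likelihood estimate from repeated loop readings reduces to least squares. The following minimal sketch, with hypothetical numbers rather than the authors' circuit, estimates a sensed resistance from voltage samples taken at a known loop current:

```python
def ml_resistance(voltages, current):
    """ML estimate of a sensed resistance from repeated voltage samples at
    a known loop current; under i.i.d. Gaussian noise this coincides with
    the least-squares (sample-mean) estimator V_mean / I."""
    v_mean = sum(voltages) / len(voltages)
    return v_mean / current

# five noisy readings across a sensed resistor driven at 4 mA loop current
r_hat = ml_resistance([1.99, 2.01, 2.00, 2.02, 1.98], current=0.004)
```

The full method would additionally model the loop's frequency response, but the reduction of ML estimation to least squares under Gaussian noise is the same principle.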

Keywords: current loop, maximum likelihood method, optimal digital signal processing, precise pressure measurement

Procedia PDF Downloads 529
16547 Biologically Inspired Small Infrared Target Detection Using Local Contrast Mechanisms

Authors: Tian Xia, Yuan Yan Tang

Abstract:

In order to obtain higher accuracy in small target detection, this paper presents an effective algorithm inspired by the local contrast mechanism. The proposed method can enhance the target signal and suppress background clutter simultaneously. In the first stage, an enhanced image is obtained using the proposed Weighted Laplacian of Gaussian. In the second stage, an adaptive threshold is adopted to segment the target. Experimental results on two challenging image sequences show that the proposed method can detect both bright and dark targets simultaneously and is not sensitive to the sea-sky line of the infrared image, so it is well suited for small infrared target detection.
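The two-stage scheme, enhancement by a Laplacian-of-Gaussian-style filter followed by adaptive thresholding at mean + k·σ, can be sketched as follows. A plain 3×3 Gaussian-then-Laplacian cascade stands in for the authors' Weighted Laplacian of Gaussian, and the kernel sizes and k value are illustrative assumptions:

```python
def convolve(img, kernel):
    """Valid-mode 2-D convolution on a list-of-lists image
    (kernels here are symmetric, so correlation == convolution)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(kernel[u][v] * img[i + u][j + v]
                 for u in range(kh) for v in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

GAUSS = [[v / 16.0 for v in row]
         for row in [[1, 2, 1], [2, 4, 2], [1, 2, 1]]]
# sign chosen so bright blobs produce positive peaks
LAPLACIAN = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]

def detect_small_targets(img, k=4.0):
    """Smooth, enhance with a Laplacian, then keep responses above an
    adaptive threshold of mean + k * standard deviation."""
    resp = convolve(convolve(img, GAUSS), LAPLACIAN)
    flat = [v for row in resp for v in row]
    mean = sum(flat) / len(flat)
    std = (sum((v - mean) ** 2 for v in flat) / len(flat)) ** 0.5
    thr = mean + k * std
    # each valid convolution trims one border pixel, hence the +2 offset
    return [(i + 2, j + 2) for i, row in enumerate(resp)
            for j, v in enumerate(row) if v > thr]
```

On a uniform background the Laplacian response is zero, so a single bright pixel stands far above the adaptive threshold; dark targets would be handled analogously by thresholding the negative tail.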

Keywords: small target detection, local contrast, human vision system, Laplacian of Gaussian

Procedia PDF Downloads 469
16546 New HCI Design Process Education

Authors: Jongwan Kim

Abstract:

Human-Computer Interaction (HCI) is a subject covering the study, planning, and design of interactions between humans and computers. The prevalent use of digital mobile devices is increasing the need for education and research on HCI. This work focuses on a new education method, incorporating role-changing brainstorming techniques into the HCI design process, geared towards reducing errors when developing application programs. The proposed method was applied to a capstone design course in the last spring semester. Students discovered examples of UI design improvements, and their capability to discover and reduce errors was promoted. A UI design improvement, PC voice control for people with disabilities as an assistive technology exemplar, will be presented. The improvement in these students' design ability will be helpful in real field work.

Keywords: HCI, design process, error reducing education, role-changing brainstorming, assistive technology

Procedia PDF Downloads 490
16545 Cyclic Etching Process Using Inductively Coupled Plasma for Polycrystalline Diamond on AlGaN/GaN Heterostructure

Authors: Haolun Sun, Ping Wang, Mei Wu, Meng Zhang, Bin Hou, Ling Yang, Xiaohua Ma, Yue Hao

Abstract:

Gallium nitride (GaN) is an attractive material for next-generation power devices. It is noted that the performance of GaN-based high electron mobility transistors (HEMTs) is often limited by the self-heating effect. In response to this problem, integrating devices with polycrystalline diamond (PCD) has been demonstrated to be an efficient way to alleviate the self-heating issue of GaN-based HEMTs. Among the heat-spreading schemes, capping the epitaxial layer with PCD before the HEMT process is one of the most effective. Currently, the mainstream method of fabricating PCD-capped HEMTs is to deposit the diamond heat-spreading layer on the AlGaN surface, which is covered by a thin nucleation dielectric/passivation layer. To achieve patterned etching of the diamond heat spreader and device preparation, we selected SiN, deposited by plasma-enhanced chemical vapor deposition (PECVD), as the hard mask for diamond etching. The conventional diamond etching method first uses F-based etching to remove the SiN from the window region, followed by O₂/Ar plasma to etch the diamond. However, scanning electron microscopy (SEM) and focused ion beam microscopy (FIB) results show numerous diamond pillars on the etched diamond surface. Through our study, we found that this is caused by the high roughness of the diamond surface and the overlap between diamond grains, which make the etching of the SiN hard mask insufficient and leave micro-masks on the diamond surface. Thus, a cyclic etching method is proposed to remove the SiN residue left after the F-based etching. In the first step, F-based etching removes the SiN hard mask in the specified region; then, O₂/Ar plasma etches the diamond in the corresponding region. These two etching steps constitute one cycle. After the first cycle, further cycles are used to clear the pillars: F-based etching removes the residual SiN, and O₂/Ar plasma then etches the diamond. Whether to perform the next cycle depends on whether SiN micro-masks remain. Using this method, we ultimately achieved self-terminated etching of the diamond and a smooth post-etch surface. These results demonstrate that the cyclic etching method can be successfully applied to the integrated preparation of polycrystalline diamond thin films and GaN HEMTs.

Keywords: AlGaN/GaN heterojunction, O₂/Ar plasma, cyclic etching, polycrystalline diamond

Procedia PDF Downloads 134
16544 Elicitation Methods of Requirements Gathering in Shopping Mobile Application Development

Authors: Xiao Yihong, Li Zhixuan, Wong Kah Seng, Shen Xingcang

Abstract:

Requirements elicitation is one of the important factors in developing any new application, and many systems fail simply because of poor elicitation practice. As a result, developers choose different methods in different fields to achieve optimal results. This paper analyses four cases to understand the effectiveness of different requirements elicitation methods in the field of mobile shopping applications. The elicitation methods studied included interviews, questionnaires, prototypes, analysis of existing systems, focus groups, and brainstorming. The research and analysis results confirmed the need for a mixture of elicitation methods. Meanwhile, the methods adopted should be determined according to the scale of the project and applied in a reasonable order to ensure highly efficient requirements elicitation.

Keywords: requirements elicitation method, shopping, mobile application, software requirement engineering

Procedia PDF Downloads 124