Search results for: spectral resolution
1741 The Concept of Community Participation and Identified Tertiary Education Problems, Strategies and Methods
Authors: Ada Adoga James
Abstract:
This paper discussed the concept of community participation, identified tertiary education problems, and outlined strategies and methods by which communities could be involved to reduce the conflict witnessed in our tertiary institutions of learning owing to the government's inability to fund education. The paper pointed out that community participation through Parent Teachers Associations (PTA), age grades, traditional leaders, village-based associations, and religious and political organs could be mobilized to raise financial resources. The paper identified different sources of conflict, the outcomes of which include prolonged disruption of academic activities, destruction of lives and property, and, in some cases, a school environment rendered completely insecure for serious academic activities. It recommends community participation in assisting government, proper management of tertiary institutions, and more democratic procedures in conflict resolution, such as cordial relationships between staff, students, and trade unions in the decision-making process.
Keywords: community, conflict resolution, tertiary education, psychology, psychiatry
Procedia PDF Downloads 481
1740 Rethinking the Use of Online Dispute Resolution in Resolving Cross-Border Small E-Disputes in EU
Authors: Sajedeh Salehi, Marco Giacalone
Abstract:
This paper examines the role of existing online dispute resolution (ODR) mechanisms and their effects on ameliorating access to justice – a right protected by Art. 47 of the EU Charter of Fundamental Rights – for consumers in the EU. The major focus of this study is on evaluating ODR as a means of dispute resolution for Business-to-Consumer (B2C) cross-border small claims raised in e-commerce transactions. The authors elaborate on the consequences of implementing ODR methods in the context of recent developments in EU regulatory safeguards on promoting consumer protection. In this analysis, both non-judiciary and judiciary ODR redress mechanisms are considered; however, significant consideration is given to – obligatory and non-obligatory – judiciary ODR methods. For that purpose, this paper particularly investigates the impact of the EU ODR platform as well as the European Small Claims Procedure (ESCP) Regulation 861/2007 and their role in accelerating access to justice for consumers in B2C e-disputes. Although a considerable volume of research has been carried out on ODR for consumer claims, rather less (or no) attention has been paid to providing a combined doctrinal and empirical evaluation of ODR's potential in resolving cross-border small e-disputes in the EU. Hence, the methodological approach taken in this study is a mixed methodology based on qualitative (interviews) and quantitative (surveys) research methods, mainly drawing on the data acquired through the findings of the Small Claims Analysis Net (SCAN) project. This project contributes towards examining the ESCP Regulation's implementation and efficiency in providing consumers with a legal watershed through the use of ODR for their transnational small claims. The outcomes of this research may benefit both academia and policymakers at the national and international levels.
Keywords: access to justice, consumers, e-commerce, small e-Disputes
Procedia PDF Downloads 128
1739 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives
Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes
Abstract:
The present article describes a regulatory impact analysis (RIA) of the efficiency of contracting for Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply the electrical energy demand. Therefore, inefficient contracting of this amount distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to provide this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666, of July 23rd, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming at more efficient and rational transmission system contracting, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first one, IIE, is applied when the contracted demand exceeds the established regulatory limit; it applies to consumer units, generators, and distribution companies. The second one, IIOC, is applied when the distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC intends to prevent agents from contracting less capacity than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the contracting efficiency of transmission system usage, using real data from before and after the homologation of the normative resolution in 2015. For this, indicators such as the contracting efficiency indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI) were used. The results demonstrated, through the ECI analysis, a decrease in contracting efficiency, a behaviour that had been occurring even before the normative resolution of 2015. On the other side, the EDI showed a considerable decrease in the amount of excess for the distributors and a small reduction for the generators; moreover, the ODI decreased notably, which optimizes the usage of the transmission installations. Hence, with the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, indicating to the agents that their contracted values are not adequate to keep their service provision to their users. The IIOC also has its relevance, in that it shows the distributors that their contracted values are overestimated.
Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system
Procedia PDF Downloads 121
1738 Features of Normative and Pathological Realizations of Sibilant Sounds for Computer-Aided Pronunciation Evaluation in Children
Authors: Zuzanna Miodonska, Michal Krecichwost, Pawel Badura
Abstract:
Sigmatism (lisping) is a speech disorder in which sibilant consonants are mispronounced. The diagnosis of this phenomenon is usually based on auditory assessment. However, progress in speech analysis techniques creates the possibility of developing computer-aided sigmatism diagnosis tools. The aim of the study is to statistically verify whether specific acoustic features of sibilant sounds may be related to pronunciation correctness. Such knowledge can be of great importance when implementing classifiers and designing novel tools for automatic sibilant pronunciation evaluation. The study covers the analysis of various speech signal measures, including features proposed in the literature for the description of normative sibilant realization. Amplitudes and frequencies of three fricative formants (FF) are extracted based on local spectral maxima of the friction noise. Skewness, kurtosis, four normalized spectral moments (SM), and 13 mel-frequency cepstral coefficients (MFCC) with their 1st and 2nd derivatives (13 Delta and 13 Delta-Delta MFCC) are included in the analysis as well. The resulting feature vector contains 51 measures. The experiments are performed on a speech corpus containing words with selected sibilant sounds (/ʃ, ʒ/) pronounced by 60 preschool children with proper pronunciation or with natural pathologies. In total, 224 /ʃ/ segments and 191 /ʒ/ segments are employed in the study. The Mann-Whitney U test is employed for the comparison of sigmatism and normative pronunciation. Statistically significant differences are obtained in most of the proposed features between children divided into these two groups at p < 0.05. All spectral moments and fricative formants appear to be distinctive between pathological and proper pronunciation. These metrics describe the friction noise characteristic of sibilants, which makes them particularly promising for use in sibilant evaluation tools. Correspondences found between phoneme feature values and an expert evaluation of pronunciation correctness encourage the involvement of speech analysis tools in the diagnosis and therapy of sigmatism. The proposed feature extraction methods could be used in computer-assisted sigmatism diagnosis or therapy systems.
Keywords: computer-aided pronunciation evaluation, sigmatism diagnosis, speech signal analysis, statistical verification
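For readers who want to prototype a similar feature set, the sketch below (not the authors' exact pipeline; the sampling rate, frame sizes, and segment lengths are assumed) extracts MFCCs with their deltas plus spectral skewness and kurtosis, and runs a Mann-Whitney U comparison between two groups of segments.

```python
# Hedged sketch: acoustic features for sibilant segments + group comparison.
# Assumes 16 kHz mono recordings and pre-cut segments of at least ~100 ms;
# this is an illustration, not the authors' exact feature pipeline.
import numpy as np
import librosa
from scipy.stats import skew, kurtosis, mannwhitneyu

def sibilant_features(segment, sr=16000):
    """Return a feature vector for one sibilant segment."""
    mfcc = librosa.feature.mfcc(y=segment, sr=sr, n_mfcc=13,
                                n_fft=512, hop_length=128)
    d1 = librosa.feature.delta(mfcc)            # 1st derivative (Delta MFCC)
    d2 = librosa.feature.delta(mfcc, order=2)   # 2nd derivative (Delta-Delta MFCC)
    spectrum = np.abs(np.fft.rfft(segment))
    return np.concatenate([
        mfcc.mean(axis=1), d1.mean(axis=1), d2.mean(axis=1),
        [skew(spectrum), kurtosis(spectrum)],   # spectral shape measures
    ])

def compare_groups(normative_segments, pathological_segments, sr=16000):
    """Mann-Whitney U test per feature between the two pronunciation groups."""
    A = np.array([sibilant_features(s, sr) for s in normative_segments])
    B = np.array([sibilant_features(s, sr) for s in pathological_segments])
    return [mannwhitneyu(A[:, i], B[:, i]).pvalue for i in range(A.shape[1])]
```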
Procedia PDF Downloads 301
1737 Application of the Seismic Reflection Survey to an Active Fault Imaging
Authors: Nomin-Erdene Erdenetsogt, Tseedulam Khuut, Batsaikhan Tserenpil, Bayarsaikhan Enkhee
Abstract:
Within the framework of 60 years of development of astronomical and geophysical science in modern Mongolia, various geophysical methods (electrical tomography, ground-penetrating radar, and high-resolution reflection seismic profiles) were used to image an active fault over a depth range from a few decimeters to a few tens of meters. The fault was ruptured by a magnitude 7.6 earthquake in 1967. After the geophysical investigations, trench excavations were carried out at the sites to expose the fault surfaces. The integrated geophysical survey of the Mogod fault, Bulgan region of central Mongolia, shows interpretable reflection arrivals in the range of < 5 m to 50 m, with the potential for increased resolution. Reflection profiles were used to help interpret the significance of neotectonic surface deformation at the active fault. The interpreted profiles show a range of shallow fault structures and provide subsurface evidence supported by paleoseismological trenching photographs and electrical surveys.
Keywords: Mogod fault, geophysics, seismic processing, seismic reflection survey
Procedia PDF Downloads 127
1736 Describing the Fine Electronic Structure and Predicting Properties of Materials with ATOMIC MATTERS Computation System
Authors: Rafal Michalski, Jakub Zygadlo
Abstract:
We present the concept, scientific methods, and algorithms of our computation system called ATOMIC MATTERS. This is the first presentation of the new computer package, which allows its user to describe the physical properties of localized atomic electron systems subject to electromagnetic interactions. Our solution applies to situations where an unclosed 2p/3p/3d/4d/5d/4f/5f electron subshell interacts with an electrostatic potential of definable symmetry and an external magnetic field. Our methods are based on the Crystal Electric Field (CEF) approach, which takes into consideration the electrostatic ligand field as well as the magnetic Zeeman effect. The application allowed us to predict macroscopic properties of materials, such as magnetic, spectral, and calorimetric properties, as a result of the physical properties of their fine electronic structure. We emphasize the importance of the symmetry of the charge surroundings of the atom/ion, spin-orbit interactions (spin-orbit coupling), and the use of complex-valued matrices in the definition of the Hamiltonian. The calculation methods, algorithms, and convention recalculation tools collected in ATOMIC MATTERS were chosen to permit the prediction of magnetic and spectral properties of materials in isostructural series.
Keywords: atomic matters, crystal electric field (CEF), spin-orbit coupling, localized states, electron subshell, fine electronic structure
Procedia PDF Downloads 319
1735 Juridically Secure Trade Mechanisms for Alternative Dispute Resolution in Transnational Business Negotiations
Authors: Linda Frazer
Abstract:
A pluralistic methodology focuses on promoting the understanding that an alternative juridical framework for the regulation of transnational business negotiations (TBN) between private business parties is fundamentally required. This paper deals with the evolving assessment of the author's doctoral research, which demonstrated that, due to insufficient juridical tools, negotiations are commonly misunderstood within the complexity of pluralistic and conflicting legal regimes. This inadequacy causes uncertainty in the enforcement of legal remedies, leaving business parties surprised. Consequently, parties cannot sufficiently anticipate when and how legal rights and obligations are created, often counting on oral or incomplete agreements, which may lead to the misinterpretation of the extent of their legal rights and obligations. This uncertainty poses threats to business parties, who fear creating unintended legal obligations or, conversely, that the law will not enforce intended agreements that fail to pass the tests of contractual validity. Finding a way to set default standards of communication and conduct to monitor our evolving global trade would help the law provide the security, predictability, and foreseeability during alternative dispute resolution that TBN parties require. The conclusion of this study includes a proposal for new trade mechanisms, termed 'Bills of Negotiations' (BON), to enhance party autonomy and promote the ability of TBN parties to self-regulate within the boundaries of law. BON will be guided by a secure, institutionalized juridical setting that caters to guiding communications during TBN and resolving disputes that arise along the negotiation process on a fast-track basis.
Keywords: alternative dispute resolution, ADR, good faith, juridical security, legal regulation, trade mechanisms, transnational business negotiations
Procedia PDF Downloads 143
1734 Heliport Remote Safeguard System Based on Real-Time Stereovision 3D Reconstruction Algorithm
Authors: Ł. Morawiński, C. Jasiński, M. Jurkiewicz, S. Bou Habib, M. Bondyra
Abstract:
With the development of optics, electronics, and computers, vision systems are increasingly used in various areas of life, science, and industry. Vision systems have a huge number of applications. They can be used in quality control, object detection, data reading (e.g., QR codes), etc. A large part of them is used for measurement purposes. Some of them make it possible to obtain a 3D reconstruction of the tested objects or measurement areas. 3D reconstruction algorithms are mostly based on creating depth maps from data that can be acquired by active or passive methods. Due to the specific application in airfield technology, only passive methods are applicable, because other systems working on the site can be blinded across most spectral ranges. Furthermore, the reconstruction is required to work over long distances, ranging from hundreds of meters to tens of kilometers, with low loss of accuracy even in harsh conditions such as fog, rain, or snow. In response to those requirements, HRESS (Heliport REmote Safeguard System) was developed, whose main part is a rotational head with a two-camera stereovision rig gathering images through 360 degrees around the head, along with stereovision 3D reconstruction and point cloud combination. The sub-pixel analysis introduced in the HRESS system makes it possible to obtain an increased distance measurement resolution and an accuracy of about 3% for distances over one kilometer. Ultimately, this leads to more accurate and reliable measurement data in the form of a point cloud. Moreover, the program algorithm introduces operations enabling the filtering of erroneously collected data in the point cloud. All activities on the programming, mechanical, and optical sides are aimed at obtaining the most accurate 3D reconstruction of the environment in the measurement area.
Keywords: airfield monitoring, artificial intelligence, stereovision, 3D reconstruction
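As a rough illustration of the depth step involved (not the HRESS implementation; the focal length, baseline, and disparity values are assumed), the sketch below converts stereo disparities into metric range and shows why sub-pixel disparity matters at kilometer distances.

```python
# Hedged sketch: range from a rectified stereo pair via triangulation.
# The baseline, focal length, and disparity values are illustrative assumptions.
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Z = f * B / d for a rectified stereo rig; disparity in pixels."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

focal_px = 8000.0   # long-focal-length lens, assumed
baseline_m = 2.0    # stereo base of the rig, assumed
depths = disparity_to_depth(np.array([16.0, 16.5]), focal_px, baseline_m)
print(depths)  # ~[1000.0, 969.7] m: a 0.5 px disparity change shifts the range by ~30 m
               # at ~1 km, which is why sub-pixel disparity estimation drives the ~3% figure.
```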
Procedia PDF Downloads 124
1733 Power Iteration Clustering Based on Deflation Technique on Large Scale Graphs
Authors: Taysir Soliman
Abstract:
One of the currently popular clustering techniques is Spectral Clustering (SC) because of its advantages over conventional approaches such as hierarchical clustering, k-means, and other techniques. However, one of the disadvantages of SC is its time-consuming computation, because it requires computing the eigenvectors. In the past, a number of attempts have been proposed to overcome this disadvantage, such as the Power Iteration Clustering (PIC) technique, which is a variant of SC; some of PIC's advantages are: 1) its scalability and efficiency, 2) finding one pseudo-eigenvector instead of computing the eigenvectors, and 3) obtaining a linear combination of the eigenvectors in linear time. However, its worst disadvantage is an inter-class collision problem, because it uses only one pseudo-eigenvector, which is not enough. Previous researchers developed Deflation-based Power Iteration Clustering (DPIC) to overcome the inter-class collision problem of PIC while keeping the same efficiency as PIC. In this paper, we developed Parallel DPIC (PDPIC) to improve the time and memory complexity; it runs on the Apache Spark framework using sparse matrices. To test the performance of PDPIC, we compared it to the SC, ESCG, and ESCALG algorithms on four small graph benchmark datasets and nine large graph benchmark datasets, where PDPIC achieved higher accuracy and shorter running time than the other compared algorithms.
Keywords: spectral clustering, power iteration clustering, deflation-based power iteration clustering, Apache Spark, large graph
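A minimal single-machine sketch of plain power iteration clustering is shown below (not the parallel Spark/deflation version described in the abstract; the affinity matrix, stopping tolerance, and random seed are assumed) to make the pseudo-eigenvector idea concrete.

```python
# Hedged sketch: basic Power Iteration Clustering in NumPy; deflation and the
# Spark parallelization described above are not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

def power_iteration_clustering(A, k, tol=1e-6, max_iter=1000):
    """A: symmetric non-negative affinity matrix; k: number of clusters."""
    W = A / A.sum(axis=1, keepdims=True)          # row-normalized affinity
    v = np.random.default_rng(0).random(A.shape[0])
    v /= np.abs(v).sum()
    prev_delta = None
    for _ in range(max_iter):
        v_new = W @ v
        v_new /= np.abs(v_new).sum()              # keep the vector bounded
        delta = np.abs(v_new - v).max()
        v = v_new
        # stop when the change in velocity (the "acceleration") becomes tiny,
        # i.e. the vector has settled into a useful pseudo-eigenvector
        if prev_delta is not None and abs(prev_delta - delta) < tol:
            break
        prev_delta = delta
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(v.reshape(-1, 1))
    return labels, v
```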
Procedia PDF Downloads 189
1732 Graphene Metamaterials Supported Tunable Terahertz Fano Resonance
Authors: Xiaoyong He
Abstract:
The manipulation of THz waves is still a challenging task due to the lack of natural materials that interact strongly with them. Designed by tailoring the characteristics of unit cells (meta-molecules), metamaterials (MMs) may solve this problem. However, because of Ohmic and radiation losses, the performance of MM devices is subject to dissipation and a low quality factor (Q-factor). This dilemma may be circumvented by Fano resonance, which arises from the destructive interference between a bright continuum mode and a dark discrete mode (or a narrow resonance). Different from the symmetric Lorentzian spectral curve, a Fano resonance exhibits a distinctly asymmetric line shape, an ultrahigh quality factor, and steep variations in the spectral curves. Fano resonance is usually realized through symmetry breaking. However, if concentric double rings (DR) are placed close to each other, the near-field coupling between them gives rise to two hybridized modes (bright and narrowband dark modes) because of the local asymmetry, resulting in the characteristic Fano line shape. Furthermore, from the practical viewpoint, it is highly desirable to modulate the Fano spectral curves conveniently, which is an important and interesting research topic. For current Fano systems, tunable spectral curves can be realized by adjusting the geometrical structural parameters or by magnetic fields biasing a ferrite-based structure. But due to the limited dispersion properties of active materials, it is still difficult to tailor the Fano resonance conveniently with fixed structural parameters. With the favorable properties of extreme confinement and high tunability, graphene is a strong candidate to achieve this goal. The DR structure supports the excitation of so-called “trapped modes,” with the merits of a simple structure and high-quality resonances in thin structures. By depositing graphene concentric double rings on a SiO2/Si/polymer substrate, the tunable Fano resonance has been theoretically investigated in the terahertz regime, including the effects of the graphene Fermi level, structural parameters, and operation frequency. The results show that the pronounced Fano peak can be efficiently modulated because of the strong coupling between the incident waves and the graphene ribbons. As the Fermi level increases, the peak amplitude of the Fano curve increases, and the resonant peak position shifts to higher frequency. The amplitude modulation depth of the Fano curves is about 30% if the Fermi level changes in the range of 0.1-1.0 eV. The optimum gap distance between the rings is about 8-12 μm, where the figure of merit shows a peak. As the graphene ribbon width increases, the Fano spectral curves broaden, and the resonant peak blue-shifts. These results are very helpful for developing novel graphene plasmonic devices, e.g., sensors and modulators.
Keywords: graphene, metamaterials, terahertz, tunable
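For reference, the generic Fano line shape underlying the asymmetric curves discussed above can be written as I(ε) ∝ (q + ε)²/(1 + ε²), with reduced detuning ε = 2(ω − ω₀)/Γ. The short sketch below evaluates this standard formula with assumed resonance parameters (it is an illustration of the textbook line shape, not the paper's simulated spectra).

```python
# Hedged sketch: the standard Fano line shape, normalized to a unit peak.
# Resonance frequency, linewidth, and q values are illustrative assumptions.
import numpy as np

def fano_lineshape(omega, omega0, gamma, q):
    eps = 2.0 * (omega - omega0) / gamma      # reduced detuning
    return (q + eps) ** 2 / ((1.0 + q ** 2) * (1.0 + eps ** 2))

omega = np.linspace(0.5, 2.5, 1001)           # THz, assumed scan range
for q in (0.5, 1.0, 3.0):                     # larger |q| -> more Lorentzian-like
    curve = fano_lineshape(omega, omega0=1.5, gamma=0.2, q=q)
    print(f"q={q}: peak at {omega[np.argmax(curve)]:.3f} THz, "
          f"dip at {omega[np.argmin(curve)]:.3f} THz")
```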
Procedia PDF Downloads 344
1731 Probing Neuron Mechanics with a Micropipette Force Sensor
Authors: Madeleine Anthonisen, M. Hussain Sangji, G. Monserratt Lopez-Ayon, Margaret Magdesian, Peter Grutter
Abstract:
Advances in micromanipulation techniques and real-time particle tracking with nanometer resolution have enabled biological force measurements at scales relevant to neuron mechanics. An approach to precisely control and maneuver neurite-tethered polystyrene beads is presented. Analogous to an Atomic Force Microscope (AFM), this multi-purpose platform is a force sensor with image acquisition and manipulation capabilities. A mechanical probe composed of a micropipette with its tip fixed to a functionalized bead is used to incite the formation of a neurite in a sample of rat hippocampal neurons while simultaneously measuring the tension in said neurite as the sample is pulled away from the beaded tip. With optical imaging methods, a force resolution of 12 pN is achieved. Moreover, the advantage of this technique over alternatives such as AFM, namely its ease of manipulation, which ultimately allows higher-throughput investigation of the mechanical properties of neurons, is demonstrated.
Keywords: axonal growth, axonal guidance, force probe, pipette micromanipulation, neurite tension, neuron mechanics
Procedia PDF Downloads 367
1730 The Impact of Trait and Mathematical Anxiety on Oscillatory Brain Activity during Lexical and Numerical Error-Recognition Tasks
Authors: Alexander N. Savostyanov, Tatyana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Yulia V. Kovas
Abstract:
The present study compared spectral-power indexes and the cortical topography of brain activity in a sample characterized by different levels of trait and mathematical anxiety. 52 healthy Russian speakers (age 17-32; 30 males) participated in the study. Participants solved an error recognition task under 3 conditions: a lexical condition (simple sentences in Russian) and two numerical conditions (simple arithmetic and complicated algebraic problems). Trait and mathematical anxiety were measured using self-report questionnaires. EEG activity was recorded simultaneously during task execution. Event-related spectral perturbations (ERSP) were used to analyze spectral-power changes in brain activity. Additionally, sLORETA was applied in order to localize the sources of brain activity. When exploring the EEG activity recorded after task onset during the lexical condition, sLORETA revealed increased activation in frontal and left temporal cortical areas, mainly in the alpha/beta frequency ranges. When examining the EEG activity recorded after task onset during the arithmetic and algebraic conditions, additional activation in the delta/theta band in the right parietal cortex was observed. The ERSP plots revealed alpha/beta desynchronizations within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. The amplitudes of these intervals reflected the accuracy of error recognition and were differently associated with the three (lexical, arithmetic, and algebraic) conditions. The level of trait anxiety was positively correlated with the amplitude of alpha/beta desynchronization. The level of mathematical anxiety was negatively correlated with the amplitude of theta synchronization and of alpha/beta desynchronization. Overall, trait anxiety was related to an increase in brain activation during task execution, whereas mathematical anxiety was associated with increased inhibitory-related activity. We gratefully acknowledge the support from the №11.G34.31.0043 grant from the Government of the Russian Federation.
Keywords: anxiety, EEG, lexical and numerical error-recognition tasks, alpha/beta desynchronization
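As a rough illustration of how event-related spectral perturbations of this kind are computed (a generic baseline-normalized spectrogram, not the authors' exact pipeline; the sampling rate, epoch layout, and baseline window are assumed), consider the sketch below.

```python
# Hedged sketch: event-related spectral perturbation (ERSP) from epoched EEG.
# epochs: array (n_trials, n_samples) for one channel; the sampling rate,
# 1 s pre-stimulus epoch start, and baseline window are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

def ersp_db(epochs, fs=500.0, baseline=(-0.8, -0.2)):
    powers = []
    for trial in epochs:
        f, t, Sxx = spectrogram(trial, fs=fs, nperseg=int(0.5 * fs),
                                noverlap=int(0.45 * fs))
        powers.append(Sxx)
    P = np.mean(powers, axis=0)                   # average power over trials
    t = t - 1.0                                   # assume epochs start 1 s pre-stimulus
    base = (t >= baseline[0]) & (t <= baseline[1])
    P0 = P[:, base].mean(axis=1, keepdims=True)   # mean baseline power per frequency
    return f, t, 10.0 * np.log10(P / P0)          # dB change relative to baseline
```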
Procedia PDF Downloads 525
1729 Small Text Extraction from Documents and Chart Images
Authors: Rominkumar Busa, Shahira K. C., Lijiya A.
Abstract:
Text recognition is an important area in computer vision which deals with detecting and recognising text from an image. Optical Character Recognition (OCR) is a saturated area these days, with very good text recognition accuracy. However, when the same OCR methods are applied to text with small font sizes, such as the text data of chart images, the recognition rate is less than 30%. This work aims to extract small text in images using a deep learning model, a CRNN with CTC loss. The text recognition accuracy is found to improve by applying image enhancement by super resolution prior to the CRNN model. We also observe that the text recognition rate increases by a further 18% by applying the proposed method, which involves super resolution and character segmentation followed by a CRNN with CTC loss. The efficiency of the proposed method shows that further pre-processing of chart image text and other small text images will improve the accuracy further, thereby helping text extraction from chart images.
Keywords: small text extraction, OCR, scene text recognition, CRNN
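A minimal CRNN-with-CTC skeleton is sketched below (a generic architecture, not the authors' trained model; the input size, channel counts, and character set are assumed) to show how convolutional features are collapsed into a sequence that is scored with CTC loss.

```python
# Hedged sketch: a small CRNN scored with CTC loss in PyTorch.
# Input size (1 x 32 x 128), hidden sizes, and alphabet are illustrative assumptions.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_classes):                  # n_classes includes the CTC blank
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
        )                                            # 32x128 image -> 128 x 8 x 32 features
        self.rnn = nn.LSTM(128 * 8, 256, bidirectional=True, batch_first=False)
        self.fc = nn.Linear(512, n_classes)

    def forward(self, x):                            # x: (N, 1, 32, 128)
        f = self.cnn(x)                              # (N, 128, 8, 32)
        f = f.permute(3, 0, 1, 2).flatten(2)         # (T=32, N, 128*8): one step per column
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(2)           # (T, N, n_classes) for CTC

model = CRNN(n_classes=37)                           # e.g., 36 characters + blank (assumed)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
images = torch.randn(4, 1, 32, 128)
log_probs = model(images)                            # (32, 4, 37)
targets = torch.randint(1, 37, (4, 10))              # dummy labels; blank index excluded
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 32, dtype=torch.long),
           target_lengths=torch.full((4,), 10, dtype=torch.long))
loss.backward()
```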
Procedia PDF Downloads 125
1728 Land Use/Land Cover Mapping Using Landsat 8 and Sentinel-2 in a Mediterranean Landscape
Authors: Moschos Vogiatzis, K. Perakis
Abstract:
Spatially explicit and up-to-date land use/land cover information is fundamental for spatial planning, land management, sustainable development, and sound decision-making. In the last decade, many satellite-derived land cover products at different spatial, spectral, and temporal resolutions have been developed, such as the European Copernicus Land Cover product. However, more efficient and detailed information for land use/land cover is required at the regional or local scale. A typical Mediterranean basin with a complex landscape, comprising various forest types, crops, artificial surfaces, and wetlands, was selected to test and develop our approach. In this study, we investigate the improvement of the Copernicus Land Cover product (CLC2018) using Landsat 8 and Sentinel-2 pixel-based classification based on all available existing geospatial data (Forest Maps, LPIS, Natura2000 habitats, cadastral parcels, etc.). We examined and compared the performance of the Random Forest classifier for land use/land cover mapping. In total, 10 land use/land cover categories were recognized in Landsat 8 and 11 in Sentinel-2A. A comparison of the overall classification accuracies for 2018 shows that the Landsat 8 classification accuracy was slightly higher than that of Sentinel-2A (82.99% vs. 80.30%). We concluded that the main land use/land cover types of CLC2018, even within a heterogeneous area, can be successfully mapped and updated according to the CLC nomenclature. Future research should be oriented toward integrating spatiotemporal information from seasonal bands and spectral indices in the classification process.
Keywords: classification, land use/land cover, mapping, random forest
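A compact version of the pixel-based Random Forest workflow looks like the sketch below (a generic scikit-learn pipeline with assumed band counts and synthetic training labels, not the authors' exact configuration).

```python
# Hedged sketch: pixel-based land cover classification with a Random Forest.
# Band count, class labels, and the train/test split are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# X: one row per labelled pixel, columns = surface reflectance bands
# (e.g., 10 Sentinel-2 bands); y: land use/land cover class codes.
rng = np.random.default_rng(0)
X = rng.random((5000, 10))
y = rng.integers(0, 11, 5000)            # 11 classes, as reported for Sentinel-2A

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("overall accuracy:", accuracy_score(y_te, pred))
print(confusion_matrix(y_te, pred))
```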
Procedia PDF Downloads 126
1727 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models
Authors: V. Mantey, N. Findlay, I. Maddox
Abstract:
The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in densely populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that models trained until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher-quality vectors. In this study, a land cover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality-check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
Keywords: building detection, disaster relief, mask-RCNN, satellite mapping
Procedia PDF Downloads 169
1726 Effect of Noise Reduction Algorithms on Temporal Splitting of Speech Signal to Improve Speech Perception for Binaural Hearing Aids
Authors: Rajani S. Pujar, Pandurangarao N. Kulkarni
Abstract:
Increased temporal masking affects speech perception in persons with sensorineural hearing impairment, especially under adverse listening conditions. This paper presents a cascaded scheme, which employs a noise reduction algorithm as well as temporal splitting of the speech signal. Earlier investigations have shown that splitting the speech temporally and presenting alternate segments to the two ears helps in reducing the effect of temporal masking. In this technique, the speech signal is processed by two fading functions, complementary to each other, and presented to the left and right ears for binaural dichotic presentation. In the present study, a half-cosine signal is used as the fading function, with a crossover gain of 6 dB for the perceptual balance of loudness. Temporal splitting is combined with a noise reduction algorithm to improve speech perception in background noise. Two noise reduction schemes, namely spectral subtraction and the Wiener filter, are used. Listening tests were conducted on six normal-hearing subjects, with sensorineural loss simulated by adding broadband noise to the speech signal at different signal-to-noise ratios (∞, 3, 0, and -3 dB). Objective evaluation using PESQ was also carried out. The MOS scores for the VCV syllable /asha/ at SNR values of ∞, 3, 0, and -3 dB were 5, 4.46, 4.4, and 4.05, respectively, while the corresponding MOS scores for unprocessed speech were 5, 1.2, 0.9, and 0.65, indicating a significant improvement in the perceived speech quality for the proposed scheme compared to the unprocessed speech.
Keywords: MOS, PESQ, spectral subtraction, temporal splitting, Wiener filter
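The dichotic splitting step can be pictured with the sketch below (a hedged interpretation of the scheme: the complementary half-cosine fading pair is implemented as raised-cosine gains that cross at 0.5, i.e. about 6 dB below peak; the segment period and sampling rate are assumed).

```python
# Hedged sketch: complementary half-cosine fading functions for dichotic presentation.
# The segment period, sampling rate, and exact gain shape are illustrative assumptions.
import numpy as np

def dichotic_split(speech, fs=16000, period_s=0.020):
    """Return (left, right) channels with complementary periodic fading."""
    n = np.arange(len(speech))
    phase = np.pi * n / (period_s * fs)        # one fade cycle per 2*period_s
    g_left = np.cos(phase) ** 2                # emphasis alternates between the ears
    g_right = np.sin(phase) ** 2               # g_left + g_right = 1 everywhere
    # At each crossover point the gains are 0.5, i.e. 20*log10(0.5) ~ -6 dB re peak.
    return speech * g_left, speech * g_right

fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
speech = np.sin(2 * np.pi * 440 * t)           # placeholder signal
left, right = dichotic_split(speech, fs)
```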
Procedia PDF Downloads 327
1725 Digital Joint Equivalent Channel Hybrid Precoding for Millimeterwave Massive Multiple Input Multiple Output Systems
Authors: Linyu Wang, Mingjun Zhu, Jianhong Xiang, Hanyu Jiang
Abstract:
Aiming at the problem that the spectral efficiency of hybrid precoding (HP) is too low in current millimeter wave (mmWave) massive multiple input multiple output (MIMO) systems, this paper proposes a digital joint equivalent channel hybrid precoding algorithm, which is based on the introduction of digital encoding matrix iteration. First, the objective function is expanded to obtain the relation equation, and the pseudo-inverse iterative function of the analog encoder is derived by using the pseudo-inverse method, which solves the problem of the greatly increased amount of computation caused by the rank deficiency of the digital encoding matrix and reduces the overall complexity of hybrid precoding. Secondly, the analog coding matrix and the millimeter-wave sparse channel matrix are combined into an equivalent channel, and the equivalent channel is then subjected to Singular Value Decomposition (SVD) to obtain the digital coding matrix; the derived pseudo-inverse iterative function is then used to iteratively regenerate the analog encoding matrix. The simulation results show that the proposed algorithm improves the system spectral efficiency by 10-20% compared with other algorithms, and the stability is also improved.
Keywords: mmWave, massive MIMO, hybrid precoding, singular value decomposition, equivalent channel
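The equivalent-channel SVD step can be illustrated with the sketch below (a generic narrowband hybrid-precoding setup with a random phase-only analog precoder, not the paper's iterative pseudo-inverse algorithm; the array sizes, stream count, and SNR are assumed).

```python
# Hedged sketch: digital precoder from the SVD of the equivalent channel H_eq = H @ F_RF.
# Antenna counts, RF-chain count, stream count, and SNR are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, Nrf, Ns, snr = 64, 16, 4, 2, 10.0      # tx/rx antennas, RF chains, streams, linear SNR

H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)

# Phase-only analog precoder (constant-modulus entries), chosen at random here.
F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(Nt, Nrf))) / np.sqrt(Nt)

H_eq = H @ F_rf                                 # equivalent channel seen by the digital stage
_, _, Vh = np.linalg.svd(H_eq)
F_bb = Vh.conj().T[:, :Ns]                      # digital precoder: leading right singular vectors
F_bb *= np.sqrt(Ns) / np.linalg.norm(F_rf @ F_bb, 'fro')   # total power constraint

He = H_eq @ F_bb
rate = np.log2(np.linalg.det(np.eye(Nr) + (snr / Ns) * He @ He.conj().T)).real
print(f"spectral efficiency ~ {rate:.2f} bit/s/Hz")
```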
Procedia PDF Downloads 96
1724 Influence of Geologic and Geotechnical Dataset Resolution on Regional Liquefaction Assessment of the Lower Wairau Plains
Authors: Omer Altaf, Liam Wotherspoon, Rolando Orense
Abstract:
The Wairau Plains are located in the northeast of the South Island of New Zealand, with alluvial deposits of fine-grained silts and sands combined with low-lying topography suggesting the presence of liquefiable deposits over significant portions of the region. Liquefaction manifestations were observed in past earthquakes, including the 1848 Marlborough and 1855 Wairarapa earthquakes, and more recently during the 2013 Lake Grassmere and 2016 Kaikōura earthquakes. Therefore, a good understanding of the deposits that may be susceptible to liquefaction is important for land use planning in the region and to allow developers and asset owners to appropriately address their risk. For this purpose, multiple approaches have been employed to develop regional-scale maps showing the liquefaction vulnerability categories for the region. After applying semi-qualitative criteria linked to geologic age and deposit type, the higher-resolution surface mapping of geomorphologic characteristics encompassing the Wairau River and the Opaoa River was used for screening. A detailed basin geologic model developed for groundwater modelling was analysed to provide a higher level of resolution than the surface-geology based classification. This is used to identify the thickness of near-surface gravel deposits, providing an improved understanding of the presence or lack of potentially non-liquefiable crust deposits. This paper describes the methodology adopted for this project and focuses on the influence of geomorphic characteristics and analysis of the detailed geologic basin model on the liquefaction classification of the Lower Wairau Plains.
Keywords: liquefaction, earthquake, cone penetration test, mapping, liquefaction-induced damage
Procedia PDF Downloads 176
1723 External Noise Distillation in Quantum Holography with Undetected Light
Authors: Sebastian Töpfer, Jorge Fuenzalida, Marta Gilaberte Basset, Juan P. Torres, Markus Gräfe
Abstract:
This work presents an experimental and theoretical study of the noise resilience of quantum holography with undetected photons. Quantum imaging has become an important research topic in recent years, after its first publication in 2014. Following this research, advances towards different spectral ranges in detection and different optical geometries have been made. In particular, interest in the field of near-infrared to mid-infrared measurements has developed because of the unique characteristic that allows a sample to be probed with photons of a different wavelength than the photons arriving at the detector. This promising effect can be used for medical applications, to measure in the so-called molecular fingerprint region, while using broadly available detectors for the visible spectral range. Further advances in the development of quantum imaging methods have been made through new measurement and detection schemes. One of these is quantum holography with undetected light. It combines digital phase-shifting holography with quantum imaging to extend the obtainable sample information by measuring not only the object transmission but also its influence on the phase shift experienced by the transmitted light. This work presents extended research on the quantum holography with undetected light scheme regarding the influence of external noise. It is shown experimentally and theoretically that the sample information can still be retrieved at noise levels 250 times higher than the signal level, because the information is transmitted by the interferometric pattern. A detailed theoretical explanation is also provided.
Keywords: distillation, quantum holography, quantum imaging, quantum metrology
Procedia PDF Downloads 75
1722 Immature Palm Tree Detection Using Morphological Filter for Palm Counting with High Resolution Satellite Image
Authors: Nur Nadhirah Rusyda Rosnan, Nursuhaili Najwa Masrol, Nurul Fatiha MD Nor, Mohammad Zafrullah Mohammad Salim, Sim Choon Cheak
Abstract:
Accurate inventories of oil palm planted areas are crucial for plantation management, as this impacts the overall economy and production of oil. One of the technological advancements in the oil palm industry is semi-automated palm counting, which is replacing conventional manual palm counting via digitized aerial imagery. Most of the semi-automated palm counting methods that have been developed are limited to mature palms, whose ideal canopy size is well represented in satellite imagery. Therefore, immature palms were often left out, since the size of their canopy is barely visible in satellite images. In this paper, an approach using a morphological filter and high-resolution satellite imagery is proposed to detect immature palm trees. This approach makes it possible to count the number of immature oil palm trees. The method begins by applying an erosion filter with an appropriate window size of 3 m to the high-resolution satellite image. The eroded image is further segmented using watershed segmentation to delineate immature palm tree regions. Then, local minimum detection is used, because it is hypothesized that immature oil palm trees are located at local minima within an oil palm field setting in a grayscale image. The detection points generated from the local minima are displaced to the center of the immature oil palm region and thinned, so that only one detection point is left to represent a tree. The performance of the proposed method was evaluated on three subsets with slopes ranging from 0 to 20° and different planting designs, i.e., straight and terrace. The proposed method was able to achieve more than 90% accuracy when compared with the ground truth, with an overall F-measure score of up to 0.91.
Keywords: immature palm count, oil palm, precision agriculture, remote sensing
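The morphological chain described above can be prototyped roughly as in the sketch below (a generic scikit-image pipeline; the pixel size of the 3 m window, the minimum peak separation, and the centroid step used in place of the displacement/thinning operation are assumptions, not the authors' exact implementation).

```python
# Hedged sketch: erosion + watershed + local-minimum detection for immature palm candidates.
# The window size in pixels and distance thresholds are illustrative assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import erosion
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def detect_immature_palms(gray, window_px=6, min_distance_px=8):
    """gray: 2-D float array (single band); window_px ~ 3 m at the image GSD."""
    eroded = erosion(gray, np.ones((window_px, window_px)))   # morphological erosion filter
    # Immature crowns are assumed to sit at local minima, i.e. maxima of the inverted image.
    minima = peak_local_max(-eroded, min_distance=min_distance_px)
    markers = np.zeros(gray.shape, dtype=int)
    markers[tuple(minima.T)] = np.arange(1, len(minima) + 1)
    regions = watershed(eroded, markers)                      # delineate candidate regions
    # One point per region: take the centroid of each watershed region as the tree location.
    centroids = ndi.center_of_mass(np.ones_like(regions), regions,
                                   index=np.arange(1, regions.max() + 1))
    return np.array(centroids), regions
```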
Procedia PDF Downloads 76
1721 Investigation of Martensitic Transformation Zone at the Crack Tip of NiTi under Mode-I Loading Using Microscopic Image Correlation
Authors: Nima Shafaghi, Gunay Anlaş, C. Can Aydiner
Abstract:
A realistic understanding of the martensitic phase transition under complex stress states is key to accurately describing the mechanical behavior of shape memory alloys (SMAs). Particularly regarding the sharply changing stress fields at the tip of a crack, the size, nature, and shape of the transformed zones are of great interest. There is significant variation among analytical models in their predictions of the size and shape of the transformation zone. As the fully transformed region remains inside a very small boundary at the tip of the crack, experimental validation requires microscopic resolution. Here, the crack tip vicinity of a NiTi compact tension specimen has been monitored in situ with microscopic image correlation at 20x magnification. With nominal 15-micrometer grains and an optical resolution of 0.2 micrometers per pixel, the strains at the crack tip are mapped with intra-grain detail. The transformation regions are then deduced using an equivalent strain formulation.
Keywords: digital image correlation, fracture, martensitic phase transition, mode I, NiTi, transformation zone
Procedia PDF Downloads 353
1720 Initial Dip: An Early Indicator of Neural Activity in Functional Near Infrared Spectroscopy Waveform
Authors: Mannan Malik Muhammad Naeem, Jeong Myung Yung
Abstract:
Functional near infrared spectroscopy (fNIRS) has a favorable position in non-invasive brain imaging techniques. The concentration change of oxygenated hemoglobin and de-oxygenated hemoglobin during particular cognitive activity is the basis for this neuro-imaging modality. Two wavelengths of near-infrared light can be used with modified Beer-Lambert law to explain the indirect status of neuronal activity inside brain. The temporal resolution of fNIRS is very good for real-time brain computer-interface applications. The portability, low cost and an acceptable temporal resolution of fNIRS put it on a better position in neuro-imaging modalities. In this study, an optimization model for impulse response function has been used to estimate/predict initial dip using fNIRS data. In addition, the activity strength parameter related to motor based cognitive task has been analyzed. We found an initial dip that remains around 200-300 millisecond and better localize neural activity.Keywords: fNIRS, brain-computer interface, optimization algorithm, adaptive signal processing
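The two-wavelength modified Beer-Lambert step mentioned above amounts to solving a small linear system; the sketch below illustrates it with assumed extinction coefficients, pathlength, and differential pathlength factors (illustrative values only, not the study's calibration).

```python
# Hedged sketch: concentration changes from the modified Beer-Lambert law (two wavelengths).
# Extinction coefficients, pathlength, and DPF values below are illustrative assumptions.
import numpy as np

# delta_OD[i] = (eps_HbO[i]*dC_HbO + eps_HbR[i]*dC_HbR) * d * DPF[i]   for wavelength i
eps = np.array([[1.49, 3.84],        # ~760 nm: [eps_HbO, eps_HbR]  (assumed, 1/(mM*cm))
                [2.53, 1.80]])       # ~850 nm
d = 3.0                              # source-detector separation in cm (assumed)
dpf = np.array([6.0, 6.0])           # differential pathlength factors (assumed)

def concentration_changes(delta_od):
    """delta_od: optical density changes at the two wavelengths -> (dC_HbO, dC_HbR) in mM."""
    A = eps * (d * dpf)[:, None]     # 2x2 system matrix
    return np.linalg.solve(A, delta_od)

d_hbo, d_hbr = concentration_changes(np.array([0.012, 0.018]))
print(f"dHbO = {d_hbo:.4f} mM, dHbR = {d_hbr:.4f} mM")
# An 'initial dip' would appear as a brief early decrease in HbO / increase in HbR.
```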
Procedia PDF Downloads 226
1719 Relative Clause Attachment Ambiguity Resolution in L2: the Role of Semantics
Authors: Hamideh Marefat, Eskandar Samadi
Abstract:
This study examined the effect of semantics on the processing of ambiguous sentences containing Relative Clauses (RCs) preceded by a complex Determiner Phrase (DP) by Persian-speaking learners of L2 English with different proficiency levels and Working Memory Capacities (WMCs). The semantic relationship studied was the one between the subject of the main clause and one of the DPs in the complex DP, to see if, as predicted by the Spreading Activation Model, priming one of the DPs through this semantic manipulation affects the L2ers' preference. The results of a task using Rapid Serial Visual Processing (a time-controlled paradigm) showed that manipulating the relationship between the subject of the main clause and one of the DPs in the complex DP preceding the RC has no effect on the choice of the antecedent; rather, the L2ers' processing is guided by phrase structure information. Moreover, while proficiency did not have any effect on the participants' preferences, WMC brought about a difference in their preferences, with a DP1 preference by those with a low WMC. This finding supports the chunking hypothesis and the predicate proximity principle, which is the strategy also used by monolingual Persian speakers.
Keywords: semantics, relative clause processing, ambiguity resolution, proficiency, working memory capacity
Procedia PDF Downloads 623
1718 Comparison between High Resolution Ultrasonography and Magnetic Resonance Imaging in Assessment of Musculoskeletal Disorders Causing Ankle Pain
Authors: Engy S. El-Kayal, Mohamed M. S. Arafa
Abstract:
There are various causes of ankle pain (AP), including traumatic and non-traumatic causes. Various imaging techniques are available for the assessment of AP. MRI is considered to be the imaging modality of choice for ankle joint evaluation, with the advantages of high spatial resolution and multiplanar capability, and hence its ability to visualize the small, complex anatomical structures around the ankle. However, the high cost and relatively limited availability of MRI systems, as well as the relatively long duration of the examination, are all considered disadvantages of MRI examination. Therefore, there is a need for a more rapid and less expensive examination modality with good diagnostic accuracy to fill this gap. HRU has become increasingly important in the assessment of ankle disorders, with the advantages of being fast, reliable, low-cost, and readily available. US can visualize detailed anatomical structures and assess tendinous and ligamentous integrity. The aim of this study was to compare the diagnostic accuracy of HRU with MRI in the assessment of patients with AP. We included forty patients complaining of AP. All patients were subjected to real-time HRU and MRI of the affected ankle. The results of both techniques were compared to surgical and arthroscopic findings. All patients were examined according to a defined protocol that included imaging for tendon tears or tendinitis, muscle tears, masses or fluid collections, ligament sprains or tears, inflammation or fluid effusion within the joint or bursa, bone and cartilage lesions, erosions, and osteophytes. Analysis of the results showed that the mean age of the patients was 38 years. The study comprised 24 women (60%) and 16 men (40%). The accuracy of HRU in detecting the causes of AP was 85%, while the accuracy of MRI was 87.5%. In conclusion, HRU and MRI are two complementary investigative tools, with the former used as a primary tool of investigation and the latter used to confirm the diagnosis and the extent of the lesion, especially when surgical intervention is planned.
Keywords: ankle pain (AP), high-resolution ultrasound (HRU), magnetic resonance imaging (MRI), ultrasonography (US)
Procedia PDF Downloads 190
1717 Numerical Simulation of Air Pollutant Using Coupled AERMOD-WRF Modeling System over Visakhapatnam: A Case Study
Authors: Amit Kumar
Abstract:
Accurate identification of regions with deteriorated air quality is very helpful in devising better environmental practices and mitigation efforts. In the present study, an attempt has been made to identify the dispersion patterns of air pollutants, especially NOX, due to vehicular and industrial sources over a rapidly developing urban city, Visakhapatnam (17°42’ N, 83°20’ E), India, during April 2009. Using the emission factors of different vehicles as well as of industry, a high-resolution 1 km x 1 km gridded emission inventory has been developed for Visakhapatnam city. A dispersion model, AERMOD, with explicit representation of planetary boundary layer (PBL) dynamics, coupled offline through a developed coupler mechanism with the high-resolution mesoscale model WRF-ARW, is used in this work for simulating the dispersion patterns of NOX. The meteorological as well as PBL parameters obtained by employing two PBL schemes of the WRF-ARW model, viz. the non-local Yonsei University (YSU) and the local Mellor-Yamada-Janjic (MYJ) schemes, which reasonably represent the boundary layer parameters, are considered for integrating AERMOD. Significantly different dispersion patterns of NOX have been noticed between the summer and winter months. The simulated NOX concentration is validated against the six available monitoring stations of the Central Pollution Control Board, India. Statistical analysis of the model-evaluated concentrations against the observations reveals that WRF-ARW with the YSU scheme coupled with AERMOD shows better performance. The locations with deteriorated air quality are identified over Visakhapatnam based on the validated model simulations of NOX concentrations. The present study advocates the utility of the developed gridded emission inventory of NOX with the coupled WRF-AERMOD modeling system for air quality assessment over the study region.
Keywords: WRF-ARW, AERMOD, planetary boundary layer, air quality
Procedia PDF Downloads 280
1716 Heritage 3D Digitalization Combining High Definition Photogrammetry with Metrologic Grade Laser Scans
Authors: Sebastian Oportus, Fabrizio Alvarez
Abstract:
3D digitalization of heritage objects is widely used nowadays. However, the most advanced 3D scanners on the market that capture topology and texture at the same time, and are specifically made for this purpose, do not deliver the accuracy that is needed for scientific research. In the last three years, we have developed a method that combines metrologic-grade laser scans, which allow us to work with a high-accuracy topology up to 15 times more precise, with a texture obtained from high-definition photogrammetry with up to 100 times higher pixel density. The result is an accurate digitalization that promotes heritage preservation, scientific study, high-detail reproduction, and digital restoration, among others. In Chile, we have already performed 478 digitalizations of high-value heritage pieces and compared the results with up to five different digitalization methods; the results obtained show considerably better dimensional accuracy and texture resolution. We know the importance of high precision and resolution for academia and museology; that is why our proposal is to set a worldwide standard using this open-source methodology.
Keywords: 3D digitalization, digital heritage, heritage preservation, digital restoration, heritage reproduction
Procedia PDF Downloads 188
1715 Dynamic Thin Film Morphology near the Contact Line of a Condensing Droplet: Nanoscale Resolution
Authors: Abbasali Abouei Mehrizi, Hao Wang
Abstract:
The thin-film region is very important in the heat transfer process due to its low thermal resistance. On the other hand, the dynamic contact angle is a crucial boundary condition in numerical simulations. While different models contain different assumptions about the microscopic contact angle, none of them has experimental evidence for its assumption, and the contact line movement mechanism still remains vague. Experimental investigation of complete wetting is more common than that of partial wetting, especially at nanoscale resolution, where there is a sharp variation in the thin-film profile in partial wetting. In the present study, an experimental investigation of the water film morphology near the triple-phase contact line during condensation is performed. State-of-the-art tapping-mode atomic force microscopy (TM-AFM) was used to obtain the high-resolution film profile down to 2 nm from the contact line. The droplet was placed in a saturated chamber. A pristine silicon wafer was used as a smooth substrate. The substrate was heated by a PI film heater, so the chamber would become oversaturated by droplet evaporation. By turning off the heater, water vapor gradually started condensing on the droplet, and the droplet advanced. The advancing speed was less than 20 nm/s. The results predominantly indicate that, in contrast to nonvolatile liquids, the film profile goes straight down to the surface until 2 nm from the substrate. However, small bending was occasionally observed below 20 nm. So, it can be claimed that, for a low condensation rate, the microscopic contact angle equals the optically detectable macroscopic contact angle. This result can be used to simplify heat transfer modeling in partial wetting. The experimental finding of the equality of the microscopic and macroscopic contact angles can be used as solid evidence for using this boundary condition in numerical simulations.
Keywords: advancing, condensation, microscopic contact angle, partial wetting
Procedia PDF Downloads 295
1714 Three-Dimensional Measurement and Analysis of Facial Nerve Recess
Authors: Kang Shuo-Shuo, Li Jian-Nan, Yang Shiming
Abstract:
Purpose: The three-dimensional anatomical structure of the facial nerve recess and its relationships were measured by high-resolution temporal bone CT to provide an imaging reference for cochlear implant surgery. Materials and Methods: By analyzing the high-resolution temporal bone CT of 160 cases (320 ears), the following parameters were measured at the level of the round window niche on axial images: 1. the distance between the facial nerve and the chorda tympani nerve, d1; 2. the distance between the facial nerve and the round window niche, d2; 3. the relative angle between the facial nerve and the round window niche, a; 4. the distance between the midpoint of the facial recess and the round window niche, d3; 5. the relative angle between the midpoint of the facial recess and the round window niche, b. Factors that might influence the anatomy of the facial recess were recorded, including the patient's sex, age, and anatomical variations (e.g., vestibular aqueduct dilation, mastoid pneumatization type, sigmoid sinus advancement, jugular bulb elevation, etc.), and the correlation between these factors and the measured facial recess parameters was analyzed. Results: The mean facial nerve-chorda tympani distance d1 was (3.92 ± 0.26) mm, the mean facial nerve-niche distance d2 was (5.95 ± 0.62) mm, the mean facial nerve-niche angle a was (94.61 ± 9.04)°, the mean recess-niche distance d3 was (6.46 ± 0.63) mm, and the mean recess-niche angle b was (113.47 ± 7.83)°. Sex, age, and an anteriorly positioned sigmoid sinus were the three factors affecting the facial recess width d1, the angle of the facial nerve relative to the round window niche a, and the angle of the facial recess relative to the round window niche b. Conclusion: High-resolution temporal bone CT before cochlear implantation can show the important anatomical relationships of the facial nerve recess, and the measurement results have clinical reference value for cochlear implantation surgery.
Keywords: cochlear implantation, recess of facial nerve, temporal bone CT, three-dimensional measurement
Procedia PDF Downloads 16
1713 Repositioning Religion as a Catalyst for Conflict Resolution in Nigeria
Authors: Samuel A. Muyiwa
Abstract:
Religious chauvinism has attained an alarming status in contemporary Nigerian society. Arguably, Nigeria is the largest economy and most populous nation in Africa, with over 182 million people, yet the advantages offered by a vibrant economy and a high population have been sacrificed on the altar of religion. Tolerance, sacrifice, humility, compassion, love, justice, trustworthiness, dedication to the well-being of others, and unity are the universal spiritual principles that lie at the heart of any religion, whether Christianity, Islam, or even traditional religion. Whereas traditional religious practices foreground the beliefs, norms, and rituals related to the sacred being, God, because of the quick and immediate consequences of their effect, the new-found religious sentiments have deviated from these norms, thus undermining cosmic harmony in Nigeria because of the long-term consequences of their effect. Religion, which is expected to accelerate growth and motivate people to develop spiritual nuances for the betterment of their communities, has, however, occasioned conflict and violence in the Nigerian socio-political cosmos. Therefore, this study examines the content of religion in the promotion of peace and unity and its contextual missing link in the promotion of conflict and violence in Nigeria.
Keywords: religious chauvinism, Nigeria, conflict, conflict resolution
Procedia PDF Downloads 318
1712 Spatially Downscaling Land Surface Temperature with a Non-Linear Model
Authors: Kai Liu
Abstract:
Remote sensing-derived land surface temperature (LST) can provide an indication of the temporal and spatial patterns of surface evapotranspiration (ET). However, the spatial resolution achieved by existing commonly used satellite products is ~1 km, which remains too coarse for ET estimation. This paper proposes a model that can disaggregate coarse-resolution MODIS LST at the 1 km scale to a finer spatial resolution of 250 m. Our approach attempts to weaken the impacts of soil moisture and growing status on LST variations. The proposed model spatially disaggregates the coarse thermal data by using a non-linear model involving the Bowen ratio, the normalized difference vegetation index (NDVI), and the photochemical reflectance index (PRI). This LST disaggregation model was tested on two heterogeneous landscapes, in central Iowa, USA, and the Heihe River, China, during the growing seasons. Statistical results demonstrated that our model performed better than the two classical methods (DisTrad and TsHARP). Furthermore, using a surface energy balance model, it was observed that the ET estimated using the disaggregated LST from our model was more accurate than that using the disaggregated LST from DisTrad and TsHARP.
Keywords: Bowen ratio, downscaling, evapotranspiration, land surface temperature
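A generic sharpening recipe of this family (in the DisTrad/TsHARP spirit, not the authors' specific non-linear model; the regressor, predictor rasters, and grid sizes are assumed) can be sketched as follows.

```python
# Hedged sketch: regression-based LST downscaling with residual correction.
# The regressor, predictors (e.g., NDVI, PRI, Bowen ratio), and resolutions are
# illustrative assumptions, not the paper's exact non-linear model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def downscale_lst(lst_coarse, predictors_coarse, predictors_fine, scale=4):
    """lst_coarse: (H, W); predictors_*: (H, W, P) and (H*scale, W*scale, P)."""
    H, W = lst_coarse.shape
    Xc = predictors_coarse.reshape(-1, predictors_coarse.shape[-1])
    yc = lst_coarse.ravel()
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xc, yc)

    # Residuals at the coarse scale preserve the coarse-pixel thermal signal.
    residual_coarse = lst_coarse - model.predict(Xc).reshape(H, W)
    residual_fine = np.kron(residual_coarse, np.ones((scale, scale)))

    Xf = predictors_fine.reshape(-1, predictors_fine.shape[-1])
    lst_fine = model.predict(Xf).reshape(H * scale, W * scale) + residual_fine
    return lst_fine
```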
Procedia PDF Downloads 329