Search results for: deep hole
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2396

1436 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model

Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson

Abstract:

The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially deep learning (DL) methods such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and have thus seen limited downstream application, as humans are generally apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a DL model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural.
The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that of the machine learning model was sourced from the comparatively lower-resolution Sentinel-2 10 m per pixel data for the same cluster locations. Rank correlation coefficients of 0.31-0.32 achieved by the human readers were much lower than those attained by the machine learning model (0.69-0.79). This superhuman performance by the model is all the more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data in which key markers of poverty and slums, such as roofing and road quality, are discernible. It is important to note, however, that the human readers did not receive any training before rating, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for the current frontier of deep learning research in this domain, eXplainable Artificial Intelligence (XAI), pursued through a collaborative rather than a comparative framework.
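The rank correlations quoted above are Spearman coefficients between predicted and survey-based wealth rankings. As a minimal sketch (pure Python, no tie handling, with made-up toy ratings rather than the study's data):

```python
def _ranks(values):
    # rank positions 0..n-1 by sorted order (ties not handled in this sketch)
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = float(r)
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# hypothetical wealth-quintile ratings vs. survey ground truth
truth = [1, 2, 3, 4, 5]
model = [1, 3, 2, 5, 4]        # imperfect but broadly monotone
rho = spearman(truth, model)   # 0.8 for this toy example
```

A production analysis would use a library routine that also handles ties (e.g. scipy.stats.spearmanr); the sketch only shows where coefficients like 0.31 or 0.79 come from.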

Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania

Procedia PDF Downloads 98
1435 The Role of Graphene Oxide on Titanium Dioxide Performance for Photovoltaic Applications

Authors: Abdelmajid Timoumi, Salah Alamri, Hatem Alamri

Abstract:

A TiO₂-Graphene Oxide (TiO₂-GO) nanocomposite was prepared by spin coating a suspension of Graphene Oxide (GO) nanosheets and Titanium Tetra Isopropoxide (TIP). The prepared nanocomposite samples were characterized by X-ray diffraction, Scanning Electron Microscopy, and Atomic Force Microscopy to examine their structure and morphology. UV-vis transmittance and reflectance spectroscopy was employed to estimate band gap energies. From the TiO₂-GO samples, a 0.25 μm thin layer was deposited on a 2x2 cm piece of glass. X-ray diffraction analysis revealed that the as-deposited layers are amorphous in nature. The surface morphology images show that the layers grew with some spherical/rod-like and partially agglomerated TiO₂-GO distributed on the surface of the composite. Atomic Force Microscopy indicated that the films are smooth, with slightly increased surface roughness. Analysis of the optical absorption data of the layers showed that the band gap energy decreased from 3.46 eV to 1.40 eV, depending on the amount of GO doping. This reduction might be attributed to electron and/or hole trapping at the donor and acceptor levels in the TiO₂ band structure. The observed results show that the inclusion of GO in the TiO₂ matrix yields significant and excellent properties, which are promising for photovoltaic applications.
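Band gap values such as the 3.46 eV quoted above are commonly extracted from absorption spectra via a Tauc plot: fit a line to the linear region of (αhν)² versus hν and read the gap where the line crosses zero. A minimal sketch with synthetic, purely illustrative data (the abstract does not state that this exact procedure was used):

```python
def band_gap_tauc(energies_ev, tauc_values):
    """Least-squares line through the (assumed linear) Tauc region;
    the optical gap is the photon energy where the line crosses zero."""
    n = len(energies_ev)
    mx = sum(energies_ev) / n
    my = sum(tauc_values) / n
    slope = (sum((e - mx) * (t - my) for e, t in zip(energies_ev, tauc_values))
             / sum((e - mx) ** 2 for e in energies_ev))
    intercept = my - slope * mx
    return -intercept / slope

# synthetic linear Tauc region for a hypothetical 3.46 eV gap
energies = [3.6, 3.8, 4.0]                    # photon energies, eV
tauc = [2.0 * (e - 3.46) for e in energies]   # (alpha*h*nu)^2, arbitrary units
gap_ev = band_gap_tauc(energies, tauc)        # ~3.46 eV
```

In practice the linear region must be chosen by inspection of the measured spectrum; the fit itself is just ordinary least squares.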

Keywords: titanium dioxide, graphene oxide, thin films, solar cells

Procedia PDF Downloads 158
1434 Experimental Study on Different Load Operation and Rapid Load-change Characteristics of Pulverized Coal Combustion with Self-preheating Technology

Authors: Hongliang Ding, Ziqu Ouyang

Abstract:

Given that China's energy structure is dominated by coal, realizing deep and flexible peak shaving of boilers in pulverized coal power plants and maximizing the consumption of renewable energy in the power grid are of great significance for ensuring China's energy security and scientifically achieving the goals of carbon peaking and carbon neutrality. Using the promising self-preheating combustion technology, which has the potential for broad-load regulation and rapid response to load changes, this study investigated the different-load operation and rapid load-change characteristics of pulverized coal combustion. Four effective load-stabilization bases were proposed according to preheating temperature, coal gas composition (calorific value), combustion temperature (spatial mean temperature and mean square temperature fluctuation coefficient), and flue gas emissions (CO and NOx concentrations), on the basis of which the load-change rates were calculated to assess the load response characteristics. Owing to the improvement of the physicochemical properties of pulverized coal after preheating, stable ignition and combustion could be obtained even at a low load of 25%, with a combustion efficiency of over 97.5%; NOx emission reached its lowest at 50% load, with a concentration of 50.97 mg/Nm³ (@6% O₂). Additionally, the load ramp-up stage displayed higher load-change rates than the load ramp-down stage, with maximum rates of 3.30 %/min and 3.01 %/min, respectively. Furthermore, the driving force created by a large load step was conducive to a higher load-change rate. The rates based on the preheating indicator attained the highest value of 3.30 %/min, while the rates based on the combustion indicator peaked at 2.71 %/min.
In comparison, the combustion indicator accurately described the system's combustion state and load changes, whereas the preheating indicator was easier to acquire and gave a higher load-change rate; hence, the appropriate evaluation strategy should depend on the actual situation. This study verified a feasible method for deep and flexible peak shaving of coal-fired power units, providing basic data and technical support for future engineering applications.
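The load-change rates quoted (e.g. 3.30 %/min) are, at heart, load differences over elapsed time. A minimal sketch of computing segment-wise ramp rates from a logged load trajectory (the times and loads below are illustrative, not the study's data):

```python
def ramp_rates(times_min, loads_pct):
    """Absolute load-change rate (% of full load per minute) for each
    consecutive segment of a logged load trajectory."""
    return [abs(l2 - l1) / (t2 - t1)
            for (t1, l1), (t2, l2) in zip(zip(times_min, loads_pct),
                                          zip(times_min[1:], loads_pct[1:]))]

# hypothetical ramp-up from 25% to 75% load over 20 minutes
times = [0.0, 10.0, 20.0]
loads = [25.0, 50.0, 75.0]
rates = ramp_rates(times, loads)   # [2.5, 2.5] %/min
```

In the study the rate is evaluated against four different indicators (preheating temperature, gas composition, combustion temperature, emissions); the arithmetic per indicator trace is the same.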

Keywords: clean coal combustion, load-change rate, peak shaving, self-preheating

Procedia PDF Downloads 66
1433 Predictive Analysis of the Stock Price Market Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It determines whether companies are thriving or in a downward spiral. A thorough understanding of it is important: many companies have whole divisions dedicated to the analysis of both their own stock and that of rival companies. Linking the world of finance with artificial intelligence (AI), especially for the stock market, is a relatively recent development. Predicting how stocks will perform, considering all external factors and previous data, has always been a human task. With the help of AI, however, machine learning models can help us make more complete predictions of financial trends. In the stock market specifically, predicting the open, close, high, and low prices for the next day is very hard to do; machine learning makes this task considerably easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. Used effectively, it can open new doors in the business and finance world and allow companies to make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural network based approaches, among other methods. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. Comparing the testing-set accuracy of four different models - linear regression, neural network, decision tree, and naïve Bayes - on the stocks of Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, JPMorgan Chase, and Johnson & Johnson, the naïve Bayes and linear regression models worked best. On the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model.
The training set showed similar results, except that the decision tree model was perfect, with complete accuracy in its predictions. This indicates that the decision tree model overfitted the training set, which explains its weaker performance on the testing set.
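As an illustration of the simplest of the four models, a one-feature linear-regression baseline that predicts the next day's close from today's close can be sketched as below. The prices are made up and the pipeline is not the paper's actual setup; it only shows the mechanics:

```python
def fit_ols(x, y):
    """Ordinary least squares fit of y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# hypothetical closing prices; regress tomorrow's close on today's close
closes = [100.0, 101.0, 103.0, 102.0, 105.0, 107.0]
a, b = fit_ols(closes[:-1], closes[1:])
next_close = a + b * closes[-1]   # naive one-step-ahead prediction
```

The other three models (neural network, decision tree, naïve Bayes) would be swapped in at the `fit` step; comparing training- versus testing-set accuracy across them is what exposes the decision tree's overfitting.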

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 90
1432 Silicon-To-Silicon Anodic Bonding via Intermediate Borosilicate Layer for Passive Flow Control Valves

Authors: Luc Conti, Dimitry Dumont-Fillon, Harald van Lintel, Eric Chappel

Abstract:

Flow control valves comprise a flexible silicon membrane that deflects against a substrate, usually made of glass, containing pillars, an outlet hole, and anti-stiction features. However, there is strong interest in using silicon instead of glass as the substrate material, as it would simplify the process flow by allowing the use of well-controlled anisotropic etching. Moreover, specific devices demanding a bending of the substrate would also benefit from the outstanding inherent mechanical strength of monocrystalline silicon. Unfortunately, direct Si-Si bonding is not easily achieved with highly structured wafers, since residual stress may prevent good adhesion between the wafers. Using a thermoplastic polymer, such as parylene, as an intermediate layer is not well adapted to this design, as the wafer-to-wafer alignment is critical. An alternative anodic bonding method using an intermediate borosilicate layer has been successfully tested. This layer was deposited onto the silicon substrate. The bonding recipe was adapted to account for the presence of the SOI buried oxide and the intermediate glass layer in order not to exceed the breakdown voltage. Flow control valves dedicated to the infusion of viscous fluids at very high pressure were made and characterized. The results are compared to previous data obtained using the standard anodic bonding method.

Keywords: anodic bonding, evaporated glass, flow control valve, drug delivery

Procedia PDF Downloads 198
1431 Strain Softening of Soil under Cyclic Loading

Authors: Kobid Panthi, Suttisak Soralump, Suriyon Prempramote

Abstract:

On June 27, 2014, slope movement was observed on the upstream side of Khlong Pa Bon Dam, Thailand. The slide did not have any major catastrophic impact on the dam structure but raised a very important question: why did the slide occur after 10 years of operation? Various site investigations (borehole tests, SASW, echo sounding, and geophysical survey), laboratory analyses, and numerical modelling using SIGMA/W and SLOPE/W were conducted to determine the cause of the slope movement. It was observed that the dam had undergone the greatest differential drawdown in its operational history in 2014, and this was identified as the major cause of the movement. From the laboratory tests, it was found that the shear strength of the clay had decreased over time and was near its residual value. The cyclic movement of water, i.e., reservoir filling and emptying, was pinpointed as the major cause of the reduction in shear strength. The numerical analysis was carried out using a modified cam clay (MCC) model to determine the strain-softening behavior of the clay. Strain accumulation was observed in the slope with each reservoir cycle, triggering the slope failure in 2014. It can be inferred that without the major drawdown of 2014, the slope would not have failed at that time; however, it would eventually have failed after a long period of time.

Keywords: slope movement, strain softening, residual strength, modified cam clay

Procedia PDF Downloads 123
1430 Polarization Effects in Cosmic-Ray Acceleration by Cyclotron Auto-Resonance

Authors: Yousef I. Salamin

Abstract:

Theoretical investigations, analytical as well as numerical, have shown that electrons can be accelerated to GeV energies by the process of cyclotron auto-resonance acceleration (CARA). In CARA, the particle is injected along the lines of a uniform magnetic field aligned parallel to the direction of propagation of a plane-wave radiation field. Unfortunately, an accelerator based on CARA would be prohibitively long and expensive to build and maintain. However, the process stands a better chance of success near the polar cap of a compact object (such as a neutron star, a black hole, or a magnetar) or in an environment created in the wake of a binary neutron-star or black-hole merger. The dynamics of the nuclides ₁H¹, ₂He⁴, ₂₆Fe⁵⁶, and ₂₈Ni⁶² in such astrophysical conditions have been investigated by single-particle calculations and many-particle simulations. The investigations show that these nuclides can reach ZeV energies (1 ZeV = 10²¹ eV) through interaction with super-intense radiation of wavelengths λ = 1 and 10 m and λ = 50 pm, and magnetic fields of strengths at the mega- and giga-tesla levels. Examples employing radiation intensities in the range 10³²-10⁴² W/m² have been used. Employing a two-parameter model to represent the radiation field, CARA is analytically generalized to include any state of polarization, and the basic working equations are derived rigorously and in closed analytic form.
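The resonance underlying CARA is set by the relativistic cyclotron frequency, f = qB / (2πγm), which the wave frequency must track as the particle gains energy. A small numerical sketch (the field strengths and γ below are illustrative, not taken from the abstract):

```python
import math

Q_E = 1.602176634e-19    # elementary charge, C
M_P = 1.67262192e-27     # proton mass, kg

def cyclotron_frequency(charge, mass, b_field, gamma=1.0):
    """Relativistic cyclotron frequency f = q*B / (2*pi*gamma*m), in Hz."""
    return charge * b_field / (2.0 * math.pi * gamma * mass)

# proton in a 1 T laboratory field, non-relativistic: ~15.2 MHz
f_lab = cyclotron_frequency(Q_E, M_P, 1.0)

# the same proton at gamma = 1000 in a hypothetical mega-tesla field
f_compact = cyclotron_frequency(Q_E, M_P, 1.0e6, gamma=1000.0)
```

The 1/γ factor is what makes auto-resonance nontrivial: as the particle accelerates, γ grows and the resonant frequency drops, which the CARA geometry compensates for via the Doppler shift along the field lines.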

Keywords: compact objects, cosmic-ray acceleration, cyclotron auto-resonance, polarization effects, zevatron

Procedia PDF Downloads 118
1429 A Numerical Investigation of Total Temperature Probes Measurement Performance

Authors: Erdem Meriç

Abstract:

Accurately measuring the total temperature of an air flow is a very important requirement in the development phases of many industrial products, including gas turbines and rockets. Thermocouples are very practical devices for measuring temperature in such cases, but in high-speed and high-temperature flows, the temperature of the thermocouple junction may deviate considerably from the real flow total temperature due to the heat transfer mechanisms of convection, conduction, and radiation. To avoid errors in total temperature measurement, special probe designs that are experimentally characterized are used. In this study, a validation case consisting of an experimental characterization of a specific class of total temperature probes is selected from the literature to develop a numerical conjugate heat transfer analysis methodology for studying the total temperature probe flow field and solid temperature distribution. The validated conjugate heat transfer methodology is used to investigate flow structures inside and around the probe and the effects of probe design parameters, such as the ratio between inlet and outlet hole areas and the probe tip geometry, on measurement accuracy. Lastly, a thermal model is constructed to account for errors in total temperature measurement for this class of probes under different operating conditions. The outcomes of this work can guide experimentalists in designing a very accurate total temperature probe and in quantifying the possible error for their specific case.
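The "recovery factor" keyword refers to the standard relation between static, total, and indicated junction temperatures. A minimal sketch for air (γ = 1.4; the recovery factor and flow values are illustrative, not from the study):

```python
GAMMA_AIR = 1.4  # ratio of specific heats for air

def total_temperature(t_static_k, mach, gamma=GAMMA_AIR):
    """Isentropic total temperature: T0 = T * (1 + (gamma-1)/2 * M^2)."""
    return t_static_k * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

def indicated_temperature(t_static_k, mach, recovery_factor, gamma=GAMMA_AIR):
    """Junction reading Tm = T + r * (T0 - T), with recovery factor r <= 1."""
    t0 = total_temperature(t_static_k, mach, gamma)
    return t_static_k + recovery_factor * (t0 - t_static_k)

t0 = total_temperature(300.0, 1.0)              # 360.0 K at Mach 1
tm = indicated_temperature(300.0, 1.0, 0.95)    # 357.0 K, i.e. a 3 K error
```

Characterizing how r varies with Mach number, Reynolds number, and probe geometry is precisely what the experimental and conjugate heat transfer studies quantify.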

Keywords: conjugate heat transfer, recovery factor, thermocouples, total temperature probes

Procedia PDF Downloads 125
1428 Women's Religiosity as a Factor in the Persistence of Religious Traditions: Kazakhstan, the XX Century

Authors: G. Nadirova, B. Aktaulova

Abstract:

The main question of the research is: how did the Kazakhs manage to keep their religious thinking during the period of active propaganda of Soviet atheism, through seventy years of struggle against religion that invoked the scientific worldview as the primary means of proving the absence of the divine and the materiality of the world? Our hypothesis is that, in the case of Kazakhstan, the conservative female religious consciousness was a factor that helped to preserve the "everyday" religiousness of the Kazakhs, which was far from the deep theological contents of Islam but able to revive within a short time after decades of proclaimed atheism.

Keywords: woman, religious thinking, Kazakhstan, soviet ideology, rituals, family

Procedia PDF Downloads 209
1427 Comparison of Extended Kalman Filter and Unscented Kalman Filter for Autonomous Orbit Determination of Lagrangian Navigation Constellation

Authors: Youtao Gao, Bingyu Jin, Tanran Zhao, Bo Xu

Abstract:

The history of satellite navigation dates back to the 1960s. From the U.S. Transit system and the Russian Tsikada system to the modern Global Positioning System (GPS) and the Globalnaya Navigatsionnaya Sputnikovaya Sistema (GLONASS), the performance of satellite navigation has been greatly improved. Nowadays, the navigation accuracy and coverage of these existing systems already fully fulfill the requirements of near-Earth users, but these systems are still beyond the reach of deep space targets. Due to the renewed interest in space exploration, a novel high-precision satellite navigation system is becoming even more important. The increasing demand for such a deep space navigation system has contributed to the emergence of a variety of new constellation architectures, such as the Lunar Global Positioning System. Apart from a Walker constellation similar to the one adopted by GPS on Earth, a novel constellation architecture consisting of libration point satellites in the Earth-Moon system is also available for constructing the lunar navigation system, which can accordingly be called the libration point satellite navigation system. The concept of using Earth-Moon libration point satellites for lunar navigation was first proposed by Farquhar and then followed by many other researchers. Moreover, due to the special characteristics of libration point orbits, an autonomous orbit determination technique called 'LiAISON navigation' can be adopted by the libration point satellites. Using only scalar satellite-to-satellite tracking data, the orbits of both the user and the libration point satellites can be determined autonomously. In this way, extensive Earth-based tracking measurements can be eliminated, and an autonomous satellite navigation system can be developed for future space exploration missions.
Besides orbit type, initial state accuracy, and measurement accuracy, the choice of state estimator is a non-negligible factor affecting orbit determination accuracy. We apply the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) to determine the orbits of Lagrangian navigation satellites, and compare the resulting autonomous orbit determination errors. The simulation results illustrate that the UKF can improve the accuracy and z-axis convergence to some extent.
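The core difference between the two estimators can be shown with a scalar toy example (not the paper's orbit model): the EKF propagates the mean through a first-order linearization, while the unscented transform propagates sigma points and so captures curvature of the nonlinearity.

```python
import math

def unscented_mean(m, p, f, kappa=2.0):
    """Scalar unscented transform: propagate N(m, p) through f and return
    the weighted sigma-point mean (state dimension n = 1)."""
    n = 1.0
    spread = math.sqrt((n + kappa) * p)
    points = [m, m + spread, m - spread]
    weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    return sum(w * f(x) for w, x in zip(weights, points))

f = lambda x: x ** 2                  # a nonlinear measurement function
m, p = 2.0, 1.0                       # prior mean and variance
ekf_mean = f(m)                       # linearization: 4.0, misses the p term
ukf_mean = unscented_mean(m, p, f)    # 5.0 = m**2 + p, exact for quadratics
```

In a full filter these propagated means and covariances feed the Kalman gain; the sigma-point mean's extra accuracy for nonlinear libration point dynamics is what drives the UKF's advantage reported above.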

Keywords: extended Kalman filter, autonomous orbit determination, unscented Kalman filter, navigation constellation

Procedia PDF Downloads 276
1426 Surgical Treatment of Glaucoma – Literature and Video Review of Blebs, Tubes, and Micro-Invasive Glaucoma Surgeries (MIGS)

Authors: Ana Miguel

Abstract:

Purpose: Glaucoma is the second leading cause of blindness worldwide and the first cause of irreversible blindness. Trabeculectomy, the standard glaucoma surgery, has a success rate between 36.0% and 98.0% at three years and a high complication rate, which has led to the development of alternative procedures, the micro-invasive glaucoma surgeries (MIGS). MIGS devices are diverse and have various indications, risks, and levels of effectiveness. We intended to review the surgical techniques, indications, contra-indications, and intraocular pressure (IOP) effects of MIGS. Methods: We performed a literature review of MIGS to differentiate the devices and their reported effectiveness compared to traditional surgery (tubes and blebs). We also conducted a video review of the author's last 1000 glaucoma surgeries (including MIGS, but also trabeculectomy, deep sclerectomy, and Ahmed and Baerveldt tubes) performed during glaucoma and advanced anterior segment fellowships in Canada and France, to describe the preferred surgical technique for each. Results: We present the videos with surgical techniques and pearls for each surgery. Glaucoma surgeries included: 1- bleb surgery (namely trabeculectomy, with releasable sutures or with slip knots, deep sclerectomy, Ahmed valve, Baerveldt tube); 2- MIGS with bleb, also known as MIBS (including XEN 45, XEN 63, and Preserflo); 3- MIGS increasing supra-choroidal flow (iStar); 4- MIGS increasing trabecular flow (iStent, gonioscopy-assisted transluminal trabeculotomy - GATT, goniotomy, excimer laser trabeculostomy - ELT); and 5- MIGS decreasing aqueous humor production (endocyclophotocoagulation, ECP). Needling (ab interno and ab externo) performed in the operating room and irido-zonulo-hyaloïdectomy (IZHV) were also included. Each technique has different indications and contra-indications. Conclusion: MIGS are valuable in glaucoma surgery, alongside traditional surgery with trabeculectomy and tubes.
All glaucoma surgeries can be combined with phacoemulsification (there may be a synergistic effect of MIGS + cataract surgery). In addition, some MIGS may be combined for a further intraocular pressure lowering effect (for example, iStents with goniotomy and ECP). A good surgical technique and postoperative management are fundamental to increasing success and good practice in all glaucoma surgery.

Keywords: glaucoma, migs, surgery, video, review

Procedia PDF Downloads 77
1425 Existential Feeling in Contemporary Chinese Novels: The Case of Yan Lianke

Authors: Thuy Hanh Nguyen Thi

Abstract:

Existentialism penetrated China from the 1940s onward and has continued to profoundly influence contemporary Chinese literature. Using the methods of close reading and textual analysis, this article analyzes the existential feeling in Yan Lianke's novels through various aspects: the Sisyphean sense, the rationalization of narrative, and the viewpoint of the dead. In addition to pointing out the characteristics of the existential feeling in the writer's novels, the article's analysis also provides insight into the nature and depth of contemporary Chinese society.

Keywords: Yan Lianke, existentialism, the existential feeling, contemporary Chinese literature

Procedia PDF Downloads 136
1424 Deep Q-Network for Navigation in Gazebo Simulator

Authors: Xabier Olaz Moratinos

Abstract:

Drone navigation is critical, particularly during the initial phases of flight, such as the first ascent, where pilots may fail due to strong external interference that could potentially lead to a crash. In this ongoing work, a drone has been successfully trained to perform an ascent of up to 6 meters while subject to external disturbances of up to 24 mph, with the DQN algorithm managing the external forces affecting the system. It has been demonstrated that the system can control its height, position, and stability about all three axes (roll, pitch, and yaw) throughout the process. The learning process is carried out in the Gazebo simulator, which emulates the interferences, while ROS is used to communicate with the agent.
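The core of a DQN agent like the one described reduces to two pieces: the Bellman target for the Q-network update and ε-greedy exploration. A minimal sketch (the network, replay buffer, and Gazebo/ROS plumbing are omitted; values are illustrative):

```python
import random

def dqn_target(reward, next_q_values, done, gamma=0.99):
    """Bellman target: y = r if the episode terminated,
    else y = r + gamma * max_a Q_target(s', a)."""
    if done:
        return reward
    return reward + gamma * max(next_q_values)

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """Pick a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

y = dqn_target(1.0, [0.5, 2.0, -1.0], done=False)   # ~2.98 = 1 + 0.99 * 2
a = epsilon_greedy([0.1, 0.7, 0.3], epsilon=0.0)    # greedy pick: action 1
```

During training, `y` becomes the regression target for Q(s, a), with `next_q_values` produced by a periodically synchronized target network; the disturbances enter only through the simulator's transitions and rewards.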

Keywords: machine learning, DQN, Gazebo, navigation

Procedia PDF Downloads 71
1423 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies, and quasars in multi-color surveys, which uses a library of ≳ 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS) but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters.
The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad, and deeply exposed filters, but less severe for surveys with many, narrow, and less deep filters.
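At its simplest, template-based photometric classification of this kind reduces to a χ² comparison of observed colors against each library entry. A toy sketch with a hypothetical three-entry library (the real method works with full probability density functions, not just the best χ²):

```python
def chi2(observed, errors, template):
    """Chi-squared distance between observed colors and template colors."""
    return sum(((o - t) / e) ** 2 for o, t, e in zip(observed, template, errors))

def classify(observed, errors, library):
    """library: (label, redshift, template_colors) tuples; best-fit entry wins."""
    label, z, _ = min(library, key=lambda entry: chi2(observed, errors, entry[2]))
    return label, z

# hypothetical template colors for three object classes
library = [
    ("star",   0.00, [0.1, 0.3, 0.2]),
    ("galaxy", 0.50, [0.8, 0.6, 0.4]),
    ("quasar", 2.10, [0.2, 0.9, 0.7]),
]
obs, err = [0.75, 0.62, 0.41], [0.05, 0.05, 0.05]
result = classify(obs, err, library)   # ('galaxy', 0.5)
```

The statistically correct treatment described above replaces the hard `min` with posterior probabilities over all templates and redshifts, from which both the class and the redshift error are derived.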

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 65
1422 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry

Authors: Dhanuj M. Gandikota

Abstract:

Sensor-derived three-dimensional (3-D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning/visualization, and ecological modeling. Machine learning (ML) holds potential for addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist in current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest-fidelity source of both canopy and below-canopy structural features, but its usage is limited in both coverage and cost, requiring manual deployment to map out large forested areas. While aerial laser scanning (ALS) remains a reliable avenue of active LIDAR remote sensing, ALS is also cost-restrictive in its deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability for the accurate construction of vegetation 3-D point clouds. It provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS demonstrate technical limitations in the capture of valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms, deep learning (DL), that shows promise in recent research on 3-D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing through learned patterns of tree species' fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point clouds of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions.
We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3-D tree point clouds, with the full TLS tree point clouds containing the below-canopy information serving as ground truth. Point cloud reconstruction accuracy was validated both through the measurement of error against the original TLS point clouds and through the error in the extraction of key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results additionally demonstrate the supplemental performance gain of using minimal locally sourced bio-inventory information as an input to ML systems to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy 3-D tree point clouds, as well as the potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.
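Reconstruction error against a TLS ground-truth cloud is commonly measured with a symmetric Chamfer distance; the abstract does not name its exact metric, so this is a hedged, brute-force sketch of one standard choice:

```python
def chamfer_distance(cloud_a, cloud_b):
    """Symmetric mean squared nearest-neighbour distance between two
    point sets, each a list of (x, y, z) tuples. O(n*m) brute force."""
    def mean_nn_sq(src, dst):
        return sum(
            min(sum((p - q) ** 2 for p, q in zip(pa, pb)) for pb in dst)
            for pa in src
        ) / len(src)
    return mean_nn_sq(cloud_a, cloud_b) + mean_nn_sq(cloud_b, cloud_a)

# toy clouds: a reconstruction with one point offset by 0.1 m in z
truth = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
recon = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.1)]
d = chamfer_distance(truth, recon)   # ~0.01 (half of 0.1^2 from each side)
```

Real point clouds would use a spatial index (k-d tree) for the nearest-neighbour step; the metric itself, and its companion structural-metric errors, are what the validation above reports.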

Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry

Procedia PDF Downloads 96
1421 Experiments on Residual Compressive Strength After Fatigue of Carbon Fiber Fabric Composites in Hydrothermal Environment

Authors: Xuan Sun, Mingbo Tong

Abstract:

In order to study the effect of a hydrothermal environment on the fatigue properties of carbon fiber fabric composites, fatigue and residual compressive strength experiments were carried out on center-hole laminates. For the fatigue experiments in a hydrothermal environment, an environmental chamber was designed, and FLUENT was used to simulate the temperature field inside the chamber, confirming that the design met the test requirements. In accordance with the relevant ASTM standards, a fatigue test fixture and a compression test fixture were designed and produced. Tension-compression fatigue tests were then carried out under standard environment conditions (temperature of 23 ± 2 °C, relative humidity of 50 ± 5% RH) and hydrothermal environment conditions (temperature of 70 ± 2 °C, relative humidity of 85 ± 5% RH). After that, the residual compressive strength tests were carried out, respectively. The residual compressive strength after fatigue in the standard environment was set as a reference value and compared with the value obtained in the hydrothermal environment. According to the results of the residual compressive strength tests, the residual compressive strength after fatigue in the hydrothermal environment decreased by 13.5%, so the hydrothermal environment has only a limited effect on the residual compressive strength of carbon fiber fabric composite laminates after fatigue under the load spectrum used in this research.

Keywords: carbon fiber, hydrothermal environment, fatigue, residual compressive strength

Procedia PDF Downloads 478
1420 Dynamic Reliability for a Complex System and Process: Application on Offshore Platform in Mozambique

Authors: Raed Kouta, José-Alcebiades-Ernesto Hlunguane, Eric Châtele

Abstract:

The search for and exploitation of new fossil energy resources is taking place against the backdrop of the gradual depletion of existing deposits. Despite the adoption of international targets to combat global warming, demand for fuels continues to grow, contradicting the movement towards an energy-efficient society. The increase in the share of offshore production in global hydrocarbon output tends to compensate for the depletion of terrestrial reserves, and thus constitutes a major challenge for players in the sector. Through the economic potential it represents and the energy independence it provides, offshore exploitation is also a challenge for states such as Mozambique, which have large maritime areas and whose environmental wealth must be considered. The exploitation of new reserves on economically viable terms depends on available technologies, and the development of deep and ultra-deep offshore fields requires significant research and development efforts. Progress has also been made in managing the multiple risks inherent in this activity. Our study proposes a reliability approach for developing products and processes designed to operate at sea. Indeed, the context of an offshore platform demands highly reliable solutions, since access to the system for regular maintenance and quick repairs is difficult and the system must resist deterioration and degradation processes. One characteristic of the failures we consider is that the actual conditions of use are 'extreme'; these conditions depend on time and on the interactions between the different causes. These two factors give the degradation process its dynamic character, hence the need to develop dynamic reliability models. Our work highlights mathematical models that can explicitly manage interactions between components and process variables.
These models are accompanied by numerical resolution methods that help to structure a dynamic reliability approach in a physical and probabilistic context. The application developed makes it possible to evaluate the reliability, availability, and maintainability of a floating storage and unloading platform for liquefied natural gas production.
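
The abstract does not detail the paper's stochastic models. As a minimal illustration of evaluating availability in the time domain, the sketch below runs a Monte Carlo simulation of a single repairable component with exponential failure and repair times (all rates are hypothetical, not values from the study):

```python
import random

def simulated_availability(mtbf, mttr, horizon, n_runs=2000, seed=42):
    """Monte Carlo estimate of availability for one repairable component
    with exponentially distributed up (MTBF) and down (MTTR) durations."""
    rng = random.Random(seed)
    up_time = 0.0
    for _ in range(n_runs):
        t, up = 0.0, True  # each run starts in the operating state
        while t < horizon:
            dt = rng.expovariate(1.0 / (mtbf if up else mttr))
            dt = min(dt, horizon - t)  # truncate at the simulation horizon
            if up:
                up_time += dt
            t += dt
            up = not up
    return up_time / (n_runs * horizon)

# Analytic steady-state availability is MTBF / (MTBF + MTTR) ≈ 0.909.
print(round(simulated_availability(mtbf=1000.0, mttr=100.0, horizon=10000.0), 3))
```

A dynamic-reliability model would additionally let the failure rate depend on process variables and component interactions at each step; this two-state sketch only shows the time-domain simulation skeleton.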

Keywords: dynamic reliability, offshore platform, stochastic process, uncertainties

Procedia PDF Downloads 115
1419 Photo Electrical Response in Graphene Based Resistive Sensor

Authors: H. C. Woo, F. Bouanis, C. S. Cojocaur

Abstract:

Graphene, which consists of a single layer of carbon atoms in a honeycomb lattice, is an interesting potential optoelectronic material because of its high carrier mobility, zero bandgap, and electron-hole symmetry. Graphene can absorb light and convert it into a photocurrent over a wide range of the electromagnetic spectrum, from the ultraviolet to the visible and infrared regimes. Over the last several years, a variety of graphene-based photodetectors have been reported, such as graphene transistors, graphene-semiconductor heterojunction photodetectors, and graphene-based bolometers. Several physical mechanisms are reported to enable photodetection: the photovoltaic effect, the photo-thermoelectric effect, the bolometric effect, the photogating effect, and so on. In this work, we report a simple approach to realizing graphene-based resistive photodetection devices and measuring their photoelectrical response. The graphene was synthesized directly on a glass substrate by a novel growth method patented in our lab. Metal (Co) electrodes were then deposited on it by thermal evaporation, with an electrode length and width of 1.5 mm and 300 μm respectively, to fabricate a simple graphene-based resistive photosensor. The measurements show that the graphene resistive devices exhibit a photoresponse to illumination with visible light. The observed resistance response was reproducible and similar after many cycles of on and off operation. This photoelectrical response may be attributed not only to the direct photocurrent process but also to the desorption of oxygen. Our work shows that simple graphene resistive devices have potential in photodetection applications.

Keywords: graphene, resistive sensor, optoelectronics, photoresponse

Procedia PDF Downloads 283
1418 A Study on Evaluation for Performance Verification of Ni-63 Radioisotope Betavoltaic Battery

Authors: Youngmok Yun, Bosung Kim, Sungho Lee, Kyeongsu Jeon, Hyunwook Hwangbo, Byounggun Choi

Abstract:

A betavoltaic battery converts nuclear energy released as beta particles (β-) directly into electrical energy. Betavoltaic cells are analogous to photovoltaic cells: a beta particle's kinetic energy enters a p-n junction and creates electron-hole pairs, and the built-in potential of the p-n junction then sweeps the electrons and holes to their respective collectors. The major challenges are the electrical conversion efficiency and its exact evaluation. In this study, the performance of a betavoltaic battery was evaluated. The betavoltaic cell was evaluated under conditions equivalent to irradiation from the radioisotope by using an FE-SEM (field emission scanning electron microscope). The average energy of the radiation emitted from the Ni-63 radioisotope is 17.42 keV, and an FE-SEM can emit an electron beam of 1-30 keV; therefore, it is possible to evaluate the betavoltaic cell without radioactive isotopes. The betavoltaic battery consists of a radioisotope physically connected to the surface of a Si-based p-n diode, so the performance of the battery can be estimated from the efficiency of the p-n diode unit cell. The current generated by the scanning electron microscope at a fixed accelerating voltage (17 keV) was measured using a Faraday cup. Electrical characterization of the p-n junction diode was performed using a Nano Probe Work Station and an I-V measurement system. The output of the betavoltaic cells developed by this research team was 0.162 μW/cm², and the efficiency was 1.14%.
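
The reported efficiency is the ratio of electrical output power to the power deposited by the 17 keV beam; numerically, beam power in µW is just current in nA times voltage in kV. The beam current below (~0.84 nA) is a hypothetical value chosen to reproduce the reported figures, not a number from the paper:

```python
def conversion_efficiency(p_out_uw, beam_current_na, beam_voltage_kv):
    """Efficiency (%) = electrical output power / deposited beam power.

    Beam power: I * V = (nA * 1e-9 A) * (kV * 1e3 V) -> W, so in µW the
    beam power is numerically nA * kV.
    """
    p_in_uw = beam_current_na * beam_voltage_kv
    return p_out_uw / p_in_uw * 100.0

# 0.162 µW output (per cm²) at 17 kV; an assumed ~0.84 nA beam reproduces
# the reported ~1.14% efficiency.
print(round(conversion_efficiency(0.162, 0.836, 17.0), 2))  # 1.14
```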

Keywords: betavoltaic, nuclear, battery, Ni-63, radio-isotope

Procedia PDF Downloads 252
1417 An Experimental Study on the Effect of Operating Parameters during the Micro-Electro-Discharge Machining of Ni Based Alloy

Authors: Asma Perveen, M. P. Jahan

Abstract:

Ni alloys cover a wide range of applications in the automotive, oil and gas, and aerospace industries. However, these alloys pose challenges for conventional machining technologies. Micro-electro-discharge machining (micro-EDM), on the other hand, is a non-conventional machining method that uses controlled spark energy to remove material irrespective of the material's hardness. There has always been strong industrial interest in developing optimal methodologies and parameters to enhance the productivity of micro-EDM, in terms of reducing machining time and tool wear, for different alloys. The aim of this study is therefore to investigate the effects of the micro-EDM process parameters and to find their optimal values. The input process parameters are voltage, capacitance, and electrode rotational speed, whereas the output parameters considered are machining time, hole entrance diameter, overcut, tool wear, and crater size. The surface morphology and elemental composition are also investigated using SEM and EDX analysis. The experimental results indicate that machining time decreases as discharge energy increases. Discharge energy also contributes to the enlargement of the entrance diameter as well as the overcut. In addition, tool wear shows a reduction with increasing discharge energy, while crater size is found to increase along with it.
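
In RC-type micro-EDM generators, the discharge energy that this study varies through voltage and capacitance is commonly idealized as E = ½CV². A small sketch (component values hypothetical; the paper does not state its generator constants):

```python
def discharge_energy_uj(capacitance_nf, voltage_v):
    """Idealized spark energy for an RC-type micro-EDM generator:
    E = 1/2 * C * V^2, returned in microjoules."""
    return 0.5 * capacitance_nf * 1e-9 * voltage_v ** 2 * 1e6

# Raising either capacitance or voltage raises the discharge energy,
# which the abstract links to shorter machining time but larger craters.
print(discharge_energy_uj(10.0, 100.0))  # 50.0 µJ
```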

Keywords: micro holes, micro EDM, Ni Alloy, discharge energy

Procedia PDF Downloads 271
1416 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for detecting facial expressions and emotions by extracting features automatically. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on extracting a different type of coarse feature with fine-grained detail in order to break the symmetry of the produced information. In effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We further introduce a Dynamic Soft-Margin SoftMax. The conventional SoftMax reaches the gold labels too quickly, which drives the model to over-fitting, because it cannot determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic, rather than static, input tensor shape in the SoftMax layer with a specified soft margin; the margin acts as a controller for how hard the model must work to push dissimilar embedding vectors apart. The proposed categorical loss aims to compact same-class labels and separate different-class labels in the normalized log domain: we penalize predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false prediction tensors, i.e., assigning more weight to classes that lie close to each other (the "hard labels to learn").
By doing so, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on addressing the weak convergence of the Adam optimizer on non-convex problems. Our optimizer uses an alternative gradient-update procedure with an exponentially weighted moving average for faster convergence, and exploits a weight decay method to drastically reduce the learning rate near optima and reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013; 90.73% on RAF-DB, a 16% improvement over the first rank after 10 years; and 100% k-fold average accuracy on CK+, performance on par with or better than networks that require much larger training datasets.
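
The dynamic soft-margin SoftMax is described only at a high level. The plain-Python sketch below shows the core margin idea with a fixed (static) margin: subtracting a margin from the true-class logit keeps the loss from saturating, since the model must push the correct class further above the rest. The paper's dynamic, shape-dependent margin is simplified away here:

```python
import math

def soft_margin_softmax(logits, true_index, margin=0.3):
    """Softmax where the true-class logit is reduced by a margin, so the
    model must keep widening the gap to the other classes."""
    adjusted = list(logits)
    adjusted[true_index] -= margin
    m = max(adjusted)                          # subtract max for stability
    exps = [math.exp(z - m) for z in adjusted]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, 1.0, 0.1]
plain = soft_margin_softmax(logits, true_index=0, margin=0.0)
margined = soft_margin_softmax(logits, true_index=0, margin=0.5)
# The margin lowers the true-class probability, so the loss stays informative.
print(plain[0] > margined[0])  # True
```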

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 72
1415 Extraction of Amorphous SiO₂ From Equisetum Arvense Plant for Synthesis of SiO₂/Zeolitic Imidazolate Framework-8 Nanocomposite and Its Photocatalytic Activity

Authors: Babak Azari, Afshin Pourahmad, Babak Sadeghi, Masuod Mokhtari

Abstract:

In this work, Equisetum arvense plant extract was used to prepare amorphous SiO₂. To prepare the SiO₂/zeolitic imidazolate framework-8 (ZIF-8) nanocomposite by a solvothermal method, the synthesized SiO₂ was added to the ZIF-8 synthesis mixture. The nanocomposite was characterized using a range of techniques. The photocatalytic activity of SiO₂/ZIF-8 was investigated systematically by degrading crystal violet, a cationic dye, under ultraviolet light irradiation. Among the synthesized samples (SiO₂, ZIF-8, and SiO₂/ZIF-8), SiO₂/ZIF-8 exhibited the highest photocatalytic activity and improved stability compared to pure SiO₂ and ZIF-8. As evidenced by scanning electron microscopy and transmission electron microscopy images, ZIF-8 particles are located over the SiO₂ without aggregation. The SiO₂ not only provides structured support for ZIF-8 but also prevents aggregation of the ZIF-8 metal-organic framework relative to isolated ZIF-8. The superior activity of this photocatalyst was attributed to synergistic effects from the SiO₂, owing to (I) its acting as an electron acceptor (from ZIF-8) and an electron donor (to O₂ molecules), (II) its preventing electron-hole recombination in ZIF-8, and (III) the maximal interfacial contact of ZIF-8 with the SiO₂ surface without aggregation or accumulation of ZIF-8. The results demonstrate that holes (h+) and •O₂- are the primary reactive species involved in the photocatalytic oxidation process. Moreover, the SiO₂/ZIF-8 photocatalyst did not show any obvious loss of photocatalytic activity during five-cycle tests, which indicates that the heterostructured photocatalyst is highly stable and can be used repeatedly.

Keywords: nano, zeolite, photocatalyst, nanocomposite

Procedia PDF Downloads 72
1414 Prediction of Cutting Tool Life in Drilling of Reinforced Aluminum Alloy Composite Using a Fuzzy Method

Authors: Mohammed T. Hayajneh

Abstract:

Machining of metal matrix composites (MMCs) is a very significant process, and its behavior during different machining operations has drawn many researchers to investigate it. The poor machinability of hard-particle-reinforced MMCs makes drilling a rather challenging task. Unlike drilling of conventional materials, drilling of MMCs encounters serious problems such as tool wear and high cutting forces. Cutting tool wear is a major industrial concern: it not only influences the quality of the drilled hole but also shortens the cutting tool life. Predicting the cutting tool life during drilling is therefore essential for optimizing the cutting conditions. However, the relationship between tool life and cutting conditions, tool geometry, and workpiece material properties has not yet been established by any machining theory. In this research work, a fuzzy subtractive clustering system is used to model the cutting tool life in drilling of Al2O3-particle-reinforced aluminum alloy composite and to investigate the effect of cutting conditions on tool life. This investigation can help in controlling and optimizing the cutting conditions when the process parameters are adjusted. The model for predicting tool life takes drill diameter, cutting speed, and feed rate as input data. The validity of the model was confirmed by examinations under various cutting conditions, and the experimental results have shown the efficiency of the model in predicting cutting tool life.
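
Fuzzy subtractive clustering (Chiu's method) begins by scoring every data point's potential as a cluster centre: points surrounded by many close neighbours in the (drill diameter, speed, feed) space score highest, and cluster centres seed the fuzzy rules. A minimal sketch of the potential computation with hypothetical normalized data:

```python
import math

def subtractive_potentials(points, ra=1.0):
    """Potential of each point as a cluster centre in subtractive clustering:
    P_i = sum_j exp(-alpha * ||x_i - x_j||^2), with alpha = 4 / ra^2."""
    alpha = 4.0 / ra ** 2
    pots = []
    for p in points:
        pots.append(sum(
            math.exp(-alpha * sum((a - b) ** 2 for a, b in zip(p, q)))
            for q in points))
    return pots

# Toy data: hypothetical (drill diameter, speed, feed) triples in [0, 1].
data = [(0.1, 0.2, 0.1), (0.12, 0.22, 0.1), (0.9, 0.8, 0.7)]
pots = subtractive_potentials(data)
# The isolated point has the lowest potential; a dense pair dominates.
print(pots[2] < pots[0])  # True
```

A full implementation would then pick the highest-potential point as a centre, subtract its influence from the remaining potentials, and repeat.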

Keywords: composite, fuzzy, tool life, wear

Procedia PDF Downloads 289
1413 The Introduction of the Revolution Einstein’s Relative Energy Equations in Even 2n and Odd 3n Light Dimension Energy States Systems

Authors: Jiradeach Kalayaruan, Tosawat Seetawan

Abstract:

This paper studied the energy of natural systems by looking at the overall picture throughout the universe. The energy of natural systems was developed from Einstein's energy equation. The researchers used new ideas called even 2n and odd 3n light dimension energy states systems, which were developed from Einstein's relativistic energy equation. The major methodology was to use the basic principle ideas or beliefs of religions such as Buddhism, Christianity, Hinduism, Islam, and Tao in order to arrive at new discoveries. The basic beliefs of each religion, Nivara, God, Ether, Atman, and Tao respectively, greatly influenced the researchers in forming new ideas from philosophy. Since the philosophy of each religion carries deep insight into the physical nature of relative energy, it connected the basic beliefs to light dimension energy states systems. Einstein's original relative energy equation shows only even 2n light dimension energy states systems (for n = 1,…,∞). Going further, the researchers multiplied Einstein's original relative energy equation by light dimension energy and obtained a new theoretical-physics idea of odd 3n light dimension energy states systems (for n = 1,…,∞), because, following the basic principle ideas or beliefs of each religion's philosophy, the media light dimension energy had to be added to Einstein's original relative energy equation. Consequently, the simple picture, in deep insight, showed that one could touch the light dimension energy of Nivara, God, Ether, Atman, and Tao by light dimension energy. Since light dimension energy is transferred by Nivara, God, Ether, Atman, and Tao, the researchers obtained the new equation of odd 3n light dimension energy states systems.
Moreover, the researchers expect to be able to solve overview problems of all light dimension energy in all natural relative energy developed from Einstein's relative energy equation. The finding of the study was called 'super nature relative energy' (in odd 3n light dimension energy states systems, for n = 1,…,∞). From the new ideas above, the summation of even 2n and odd 3n light dimension energy states systems can be taken over all natural light dimension energy states systems. In the future, the researchers expect the new idea to be used in theoretical physics, which would be useful to the development of quantum mechanics, engineering, the medical profession, transportation, communication, scientific invention, and technology.

Keywords: 2n light dimension energy states systems effect, Ether, even 2n light dimension energy states systems, nature relativity, Nivara, odd 3n light dimension energy states systems, perturbation points energy, relax point energy states systems, stress perturbation energy states systems effect, super relative energy

Procedia PDF Downloads 337
1412 Multimedia Design in Tactical Play Learning and Acquisition for Elite Gaelic Football Practitioners

Authors: Michael McMahon

Abstract:

Media (video/animation/graphics) have long been used by athletes, coaches, and sports scientists to analyse and improve performance in technical skills and team tactics. Sports educators are increasingly open to the use of technology to support coach and learner development; however, overreliance is a concern. This paper is part of a larger Ph.D. study into these new challenges for sports educators, most notably how to exploit the deep-learning potential of digital media among expert learners, how to instruct sports educators to create effective media content that fosters deep learning, and how to make the process manageable and cost-effective. Central to the study is Richard Mayer's Cognitive Theory of Multimedia Learning, which proposes twelve principles that shape the design and organization of multimedia presentations to improve learning and reduce cognitive load. For example, the prior-knowledge principle highlights different learning outcomes for novice and non-novice learners, respectively. Little research, however, is available to support this principle in modified domains (e.g., sports tactics and strategy). As a foundation for further research, this paper compares and contrasts a range of contemporary multimedia sports coaching content and assesses how it performs as a learning tool for strategic and tactical play acquisition among elite sports practitioners. The stress tests applied are guided by Mayer's twelve multimedia learning principles. The focus is on elite athletes and whether current coaching digital media content fosters improved sports learning among this cohort. The sport of Gaelic football was selected because it has high strategic and tactical play content, a wide range of practitioner skill levels (novice to elite), and a significant volume of multimedia coaching content available for analysis.
It is hoped the resulting data will help identify and inform future instructional content design and delivery for sports practitioners and help promote design practices optimal for different levels of expertise.

Keywords: multimedia learning, e-learning, design for learning, ICT

Procedia PDF Downloads 98
1411 BERT-Based Chinese Coreference Resolution

Authors: Li Xiaoge, Wang Chaodong

Abstract:

We introduce the first Chinese coreference resolution model based on BERT (CCRM-BERT) and show that it significantly outperforms all previous work. The key idea is to incorporate features of the mention, such as part of speech, span width, and distance between spans, and the influence of each feature on the model is analyzed. The model computes mention embeddings that combine BERT representations with these features. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the Chinese OntoNotes benchmark.

Keywords: BERT, coreference resolution, deep learning, natural language processing

Procedia PDF Downloads 204
1410 Fatigue Analysis of Spread Mooring Line

Authors: Chanhoe Kang, Changhyun Lee, Seock-Hee Jun, Yeong-Tae Oh

Abstract:

An offshore floating structure maintains a fixed position under various environmental conditions by means of a mooring system. Environmental loads and vessel motions are applied to the mooring lines as dynamic tension. Because the global responses of a mooring system in deep water comprise wave-frequency and low-frequency responses, they should be calculated by time-domain analysis due to their non-linear dynamic characteristics. Taking into account all mooring loads, environmental conditions, and added-mass and damping terms at each time step requires considerable computation time and capacity. Thus, provided that reliable fatigue damage can be derived through a reasonable analysis method, it is necessary to reduce the number of analysis cases through sensitivity studies and appropriate assumptions. In this paper, fatigue effects are studied for the spread mooring system of an oil FPSO positioned in deep water offshore West Africa. The target FPSO, with two Mbbls of storage, has 16 spread mooring lines (4 bundles x 4 lines). Sensitivity studies are performed for environmental loads, type of response, vessel offsets, mooring position, loading conditions, and riser behavior, and each parameter is investigated through fatigue analysis for its effect on fatigue damage. Based on the sensitivity studies, the following results are presented. Wave loads are more dominant in terms of fatigue than other environmental conditions. The wave-frequency response causes higher fatigue damage than the low-frequency response. A larger vessel offset increases the mean tension and thus the fatigue damage. The external line of each bundle shows the highest fatigue damage, governed by vessel pitch motion due to swell conditions. Among the three loading conditions, the ballast condition has the highest fatigue damage due to higher tension.
Riser damping arising from riser behavior tends to reduce the fatigue damage. The various analysis results obtained from these sensitivity studies can be used as a reference for a simplified fatigue analysis of spread mooring lines.
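
The abstract does not state its damage-accumulation rule; mooring fatigue screening of this kind typically accumulates damage with the Palmgren-Miner rule over tension-range blocks of the load spectrum. A sketch with hypothetical cycle counts (none of the numbers below come from the study):

```python
def miner_damage(blocks):
    """Palmgren-Miner cumulative fatigue damage: sum over load blocks of
    (applied cycles n_i) / (cycles to failure N_i at that tension range)."""
    return sum(n / N for n, N in blocks)

# Hypothetical annual load blocks for one mooring line:
# (cycles applied per year, cycles to failure at that tension range).
blocks = [(1.0e6, 5.0e7), (2.0e5, 2.0e6), (1.0e3, 1.0e5)]
damage_per_year = miner_damage(blocks)
fatigue_life_years = 1.0 / damage_per_year  # failure predicted at damage = 1
print(round(damage_per_year, 3), round(fatigue_life_years, 1))  # 0.13 7.7
```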

Keywords: mooring system, fatigue analysis, time domain, non-linear dynamic characteristics

Procedia PDF Downloads 330
1409 Hybrid Temporal Correlation Based on Gaussian Mixture Model Framework for View Synthesis

Authors: Deng Zengming, Wang Mingjiang

Abstract:

As 3D video has been explored as a hot research topic over the last few decades, free-viewpoint TV (FTV) is no doubt a promising field for its better visual experience and incomparable interactivity. View synthesis is a crucial technology for FTV: it enables images to be rendered at an unlimited number of virtual viewpoints from the information of a limited number of reference views. In this paper, a novel hybrid synthesis framework is proposed and blending priority is explored. In contrast to the commonly used View Synthesis Reference Software (VSRS), the presented synthesis process is driven by the temporal correlation of image sequences. The temporal correlations are exploited to produce fine synthesis results even near foreground boundaries. As for the blending priority, the scheme selects one of the two reference views as the main reference view based on the distance between the reference views and the virtual view; the other view is chosen as the auxiliary viewpoint and merely assists in filling hole pixels with the help of background information. Significant improvement of the proposed approach over the state-of-the-art pixel-based virtual view synthesis method is presented: the experimental results show that subjective gains can be observed, objective PSNR gains average from 0.5 to 1.3 dB, and SSIM gains average from 0.01 to 0.05.
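
The main/auxiliary selection can be sketched as an inverse-distance rule: the reference view nearer the virtual viewpoint becomes the main view. The weighted blend below is an illustrative assumption, not the paper's exact scheme (in which the auxiliary view only fills hole pixels rather than contributing a global weight):

```python
def blending_weights(dist_main, dist_aux):
    """Inverse-distance blending: the reference view nearer the virtual
    viewpoint gets the larger weight; the two weights sum to 1."""
    total = dist_main + dist_aux
    return dist_aux / total, dist_main / total

# The virtual view lies closer to the main reference view (distance 1 vs 3),
# so the main view dominates the blend.
w_main, w_aux = blending_weights(dist_main=1.0, dist_aux=3.0)
print(w_main, w_aux)  # 0.75 0.25
```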

Keywords: fusion method, Gaussian mixture model, hybrid framework, view synthesis

Procedia PDF Downloads 244
1408 ANAC-id - Facial Recognition to Detect Fraud

Authors: Giovanna Borges Bottino, Luis Felipe Freitas do Nascimento Alves Teixeira

Abstract:

This article presents a case study of ANAC-id at the National Civil Aviation Agency (ANAC) in Brazil. ANAC-id is an artificial intelligence algorithm developed for image analysis that recognizes standard images of an unobstructed, upright face without sunglasses, allowing potential inconsistencies to be identified. It combines the YOLO architecture with three Python libraries, face recognition, face comparison, and deep face, providing robust analysis with a high level of accuracy.

Keywords: artificial intelligence, deepface, face compare, face recognition, YOLO, computer vision

Procedia PDF Downloads 148
1407 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients representing the acoustic input. Another way to evaluate the fit is a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs), which have many hidden layers and are trained using new methods, have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. In these low-resource scenarios, building an ASR model is a complex task due to the lack of labeled data, which results in an under-trained system. Semi-supervised learning approaches thus become necessary given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. The approach consists of: (a) Training a seed ASR model with a DNN using a set of audio files and their respective transcriptions. The DNN was initialized as a one-hidden-layer network, and the number of hidden layers was increased during training to five. A refinement consisting of the weight matrix plus bias term, together with Stochastic Gradient Descent (SGD) training, was also performed; the objective function was the cross-entropy criterion.
(b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is measured by the Word Error Rate (WER). The test dataset was renewed in order to remove the new transcriptions added to the training dataset. Several experiments were carried out to select the best ASR results, and a comparison between a GMM-based model without retraining and the proposed DNN system was made under the same conditions. Results showed that the semi-supervised DNN-based ASR model outperformed the GMM model in terms of WER in all tested cases; the best result was a 6% relative WER improvement. These promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
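
WER, the metric used throughout, is the word-level Levenshtein distance (substitutions + deletions + insertions) normalized by the reference length. A self-contained sketch (the Spanish sentences are made-up examples, not data from the study):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[-1][-1] / len(ref)

# One substituted word out of four reference words -> WER 0.25.
print(word_error_rate("el gato negro duerme", "el gato blanco duerme"))  # 0.25
```

A "6% relative WER improvement" then means the new WER is 6% lower than the baseline WER, e.g. 0.25 improving to 0.235.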

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 335