Search results for: very high resolution (VHR)
19851 Fire Resistance Capacity of Reinforced Concrete Member Strengthened by Fiber Reinforced Polymer
Authors: Soo-Yeon Seo, Jong-Wook Lim, Se-Ki Song
Abstract:
Currently, FRP (Fiber Reinforced Polymer) materials are widely used for the reinforcement of building structural members. However, since FRP and the epoxy used to attach it have very low resistance to heat, their application is problematic where high temperature is an issue. In this paper, the resistance performance of an FRP member made of carbon fiber at high temperature was investigated through experiments under temperature change. The results show that the epoxy encapsulating the FRP is damaged even at moderate temperatures, and the fibers are degraded. Therefore, when reinforcing a structure using FRP, a separate refractory (fireproofing) treatment is necessary. The use of a 30 mm thick calcium silicate board as a fireproofing method can protect FRP up to an outside temperature of 600 °C.
Keywords: FRP (Fiber Reinforced Polymer), high temperature, experiment under temperature change, calcium silicate board
Procedia PDF Downloads 396
19850 Analysis of Composite Health Risk Indicators Built at a Regional Scale and Fine Resolution to Detect Hotspot Areas
Authors: Julien Caudeville, Muriel Ismert
Abstract:
Analyzing the relationship between environment and health has become a major preoccupation for public health, as evidenced by the emergence of the French national plans for health and environment. These plans have identified two priorities: (1) to identify and manage geographic areas where hotspot exposures are suspected to generate a potential hazard to human health; (2) to reduce exposure inequalities. At a regional scale and the fine resolution required for exposure outcomes, environmental monitoring networks are not sufficient to characterize the multidimensionality of the exposure concept. To increase the representativeness of spatial exposure assessment approaches, composite risk indicators can be built using additional available databases and theoretical frameworks for combining risk factors. Achieving these objectives requires combining data processing and transfer modeling with a spatial approach, which in turn implies overcoming several scientific limitations: defining the variables and indicators of interest that can be built to associate and describe the global source-effect chain; linking and processing data from different sources and different spatial supports; and developing adapted methods to improve the representativeness and resolution of spatial data. A GIS-based modeling platform for quantifying human exposure to chemical substances (PLAINE: environmental inequalities analysis platform) was used to build health risk indicators within the Lorraine region (France). Those indicators combined chemical substances (in soil, air and water) and noise risk factors. Tools have been developed using modeling, spatial analysis and geostatistical methods to build and discretize the variables of interest from different supports and resolutions onto a 1 km2 regular grid covering the Lorraine region. For example, surface soil concentrations were estimated by developing a kriging method able to integrate surface and point spatial supports. Then, an exposure model developed by INERIS was used to assess the transfer from soil to individual exposure through ingestion pathways. The distance from polluted soil sites was used to build a proxy for contaminated-site exposure. The air indicator combined modeled concentrations and estimated emissions to take into account 30 pollutants in the analysis. For water, drinking water concentrations were compared to drinking water standards to build a score spatialized using a map of drinking water distribution units. The Lden (day-evening-night) indicator was used to map noise around road infrastructures. Aggregation of the different risk factors was carried out using different methodologies in order to discuss the impact of weighting and aggregation procedures on the effectiveness of risk maps for decisions aimed at safeguarding citizen health. The results make it possible to identify pollutant sources, determinants of exposure, and potential hotspot areas. A diagnostic tool was developed for stakeholders to visualize and analyze the composite indicators in an operational and accurate manner.
The designed support system will be used in many applications and contexts: (1) mapping environmental disparities throughout the Lorraine region; (2) identifying vulnerable populations and determinants of exposure to set priorities and targets for pollution prevention, regulation and remediation; (3) providing an exposure database to quantify relationships between environmental indicators and cancer mortality data provided by the French Regional Health Observatories.
Keywords: health risk, environment, composite indicator, hotspot areas
Procedia PDF Downloads 249
19849 Maintaining the Formal Type of West Java's Heritage Language with Sundanese Language Lesson in Senior High School
Authors: Dinda N. Lestari
Abstract:
The Sundanese language is one of the heritage languages of Indonesia that must be maintained, especially its formal register, because teenagers nowadays do not speak formal Sundanese in their daily lives. To maintain it, the Ministry of Education and Culture of Indonesia has introduced Sundanese language lessons in senior high schools in the West Java area. The aim of this study was to observe whether the existence of the Sundanese language lesson in senior high schools in the large town of Karawang, West Java, Indonesia contributes to the maintenance of the formal type of Sundanese. To gather the data, the researcher interviewed senior high school students who had learned Sundanese in order to observe their acquisition of it, and the interview data were analyzed qualitatively. The findings indicate that the Sundanese language lesson in senior high school, as well as the related educational programs, for instance Kemis Nyunda, do not seem effective enough in maintaining the formal type of Sundanese. Therefore, the West Java government must revise its learning strategy, including the role of the Sundanese language teacher.
Keywords: heritage language, language maintenance and shift, senior high school, Sundanese language, Sundanese language lesson
Procedia PDF Downloads 150
19848 Investigation of Different Machine Learning Algorithms in Large-Scale Land Cover Mapping within the Google Earth Engine
Authors: Amin Naboureh, Ainong Li, Jinhu Bian, Guangbin Lei, Hamid Ebrahimy
Abstract:
Large-scale land cover mapping has become a new challenge in the land change and remote sensing fields because it involves a large volume of data. Moreover, selecting the right classification method is quite difficult, especially when different types of landscapes are present in the study area. This paper compares the performance of different machine learning (ML) algorithms for generating a land cover map of the China-Central Asia–West Asia Corridor, which is considered one of the main parts of the Belt and Road Initiative (BRI). The cloud-based Google Earth Engine (GEE) platform was used to generate a land cover map of the study area from Landsat-8 images (2017) by applying three frequently used ML algorithms: random forest (RF), support vector machine (SVM), and artificial neural network (ANN). The selected ML algorithms (RF, SVM, and ANN) were trained and tested using reference data obtained from the MODIS yearly land cover product and very high-resolution satellite images. The findings of the study illustrate that, among the three frequently used ML algorithms, RF, with 91% overall accuracy, gave the best result in producing a land cover map for the China-Central Asia–West Asia Corridor, whereas ANN showed the worst result, with 85% overall accuracy. The strong performance of GEE in applying different ML algorithms and handling a huge volume of remotely sensed data in the present study shows that it could also help researchers generate reliable long-term land cover change maps. The findings of this research are of great importance for decision-makers and the BRI's authorities in strategic land use planning.
Keywords: land cover, google earth engine, machine learning, remote sensing
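For readers who want to reproduce this kind of algorithm comparison outside GEE, the following is a minimal, hypothetical sketch in Python using scikit-learn; the band values, class labels, and hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
# Hypothetical sketch: comparing RF, SVM and ANN overall accuracy on
# pixel samples (rows = pixels, columns = spectral bands). Not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.random((3000, 7))              # placeholder for Landsat-8 band reflectances
y = rng.integers(0, 6, 3000)           # placeholder land cover class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "SVM": SVC(kernel="rbf", C=10, gamma="scale"),
    "ANN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    oa = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: overall accuracy = {oa:.2%}")
```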
Procedia PDF Downloads 113
19847 The Developmental Model of Self-Efficacy Emotional Intelligence and Social Maturity among High School Boys and Girls
Authors: Shrikant Chavan, Vikas Minchekar
Abstract:
The present study examined the self-efficacy, emotional intelligence and social maturity of high school boys and girls. Furthermore, the study aimed to foster the self-efficacy, emotional intelligence and social maturity of high school students. The study was conducted on 100 high school students, of whom 50 boys and 50 girls were selected through the simple random sampling method from the Sangli city of Maharashtra state, India. The age range of the sample is 14 to 16 years. The self-efficacy scale developed by Jerusalem and Schwarzer, the emotional intelligence scale developed by Hyde, Pethe and Dhar, and the social maturity scale developed by Rao were administered to the sample. Data were analyzed using the mean, SD and 't' test; further, Karl Pearson's product-moment coefficient of correlation was used to determine the correlation between emotional intelligence, self-efficacy, and social maturity. Results revealed that boys and girls did not differ significantly in their self-efficacy and social maturity. Further, the analysis revealed that girls have higher emotional intelligence than boys, which is significant at the 0.01 level. It was also found that there is a significant and positive correlation between self-efficacy and emotional intelligence, self-efficacy and social maturity, and emotional intelligence and social maturity. Some developmental strategies to strengthen the self-efficacy, emotional intelligence and social maturity of high school students are suggested in the study.
Keywords: self-efficacy, emotional intelligence, social maturity, developmental model and high school students
Procedia PDF Downloads 469
19846 Learning Based on Computer Science Unplugged in Computer Science Education: Design, Development, and Assessment
Authors: Eiko Takaoka, Yoshiyuki Fukushima, Koichiro Hirose, Tadashi Hasegawa
Abstract:
Although all high school students in Japan are required to learn informatics, many of them do not learn this topic sufficiently. In response to this situation, we propose a support package for high school informatics classes. To examine what students learned and if they sufficiently understood the context of the lessons, a questionnaire survey was distributed to 186 students. We analyzed the results of the questionnaire and determined the weakest units, which were “basic computer configuration” and “memory and secondary storage”. We then developed a package for teaching these units. We propose that our package be applied in high school classrooms.
Keywords: computer science unplugged, computer science outreach, high school curriculum, experimental evaluation
Procedia PDF Downloads 389
19845 Comparison of Support Vector Machines and Artificial Neural Network Classifiers in Characterizing Threatened Tree Species Using Eight Bands of WorldView-2 Imagery in Dukuduku Landscape, South Africa
Authors: Galal Omer, Onisimo Mutanga, Elfatih M. Abdel-Rahman, Elhadi Adam
Abstract:
Threatened tree species (TTS) play a significant role in ecosystem functioning and services, land use dynamics, and other socio-economic aspects. Such aspects include ecological, economic, livelihood, security-based, and well-being benefits. The development of techniques for mapping and monitoring TTS is thus critical for understanding the functioning of ecosystems. The advent of advanced imaging systems and supervised learning algorithms has provided an opportunity to classify TTS over a fragmenting landscape. Recently, vegetation maps have been produced using advanced imaging systems such as WorldView-2 (WV-2) and robust classification algorithms such as support vector machines (SVM) and artificial neural networks (ANN). However, the delineation of TTS in a fragmenting landscape using high resolution imagery has largely remained elusive due to the complexity of the species structure and their distribution. Therefore, the objective of the current study was to examine the utility of the advanced WV-2 data for mapping TTS in the fragmenting Dukuduku indigenous forest of South Africa using the SVM and ANN classification algorithms. The results showed the robustness of the two machine learning algorithms, with an overall accuracy (OA) of 77.00% (total disagreement = 23.00%) for SVM and 75.00% (total disagreement = 25.00%) for ANN using all eight bands of WV-2 (8B). This study concludes that the SVM and ANN classification algorithms with WV-2 8B have the potential to classify TTS in the Dukuduku indigenous forest. This study offers relatively accurate information that is important for forest managers to make informed decisions regarding management and conservation protocols of TTS.
Keywords: artificial neural network, threatened tree species, indigenous forest, support vector machines
Procedia PDF Downloads 515
19844 Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models
Authors: Ainouna Bouziane
Abstract:
The ability of electron tomography to recover the 3D structure of catalysts, with spatial resolution at the subnanometer scale, has been widely explored and reviewed in the last decades. A variety of experimental techniques, based either on Transmission Electron Microscopy (TEM) or Scanning Transmission Electron Microscopy (STEM), have been used to reveal different features of nanostructured catalysts in 3D, but High Angle Annular Dark Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (compressed sensing total variation minimization) algorithms to extract more reliable quantitative information from 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is an important issue that has not been properly addressed yet, because a perfectly known reference is needed. The problem becomes particularly complicated in the case of multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction/segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions, which considers the influence of relevant experimental parameters such as the range of tilt angles, the image noise level, and the object orientation. The approach is based on the analysis of material-realistic 3D phantoms, which include the most relevant features of the system under analysis.
Keywords: electron tomography, supported catalysts, nanometrology, error assessment
Procedia PDF Downloads 88
19843 In Search of High Growth: Mapping out Academic Spin-Offs' Performance in Catalonia
Abstract:
This exploratory study gives an overview of the evolution of the main financial and performance indicators of Academic Spin-Offs and High Growth Academic Spin-Offs in year 3 and year 6 after their creation in the region of Catalonia in Spain. The study compares and evaluates the results of these different measures of performance and the degree of success of these companies for each university. We found that the average Catalonian Academic Spin-Off is small and has not achieved the sustainability stage by year 6. On the contrary, a small group of High Growth Academic Spin-Offs exhibits robust performance, with high profits in year 6. Our results support the need to increase selectivity and support for these companies, especially near year 3, because they are the ones that will bring wealth and employment. The university's role as an investor is constrained by rigid norms and habits that impede an efficient economic return on its ASO investment. Universities with high performance in sales and employment in year 3 cannot always sustain this growth in year 6 because their ASOs are not profitable. On the contrary, profitable ASOs exhibit superior performance in all measured indicators in year 6. We advocate the need for balanced growth (with profits) as a way to obtain subsequent continuous growth.
Keywords: Academic Spin-Off (ASO), university entrepreneurship, entrepreneurial university, high growth, New Technology Based Companies (NTBC), University Spin-Off
Procedia PDF Downloads 458
19842 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery
Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong
Abstract:
Machine learning techniques based on convolutional neural networks (CNN) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to the training set, which generally needs to be large enough for the model under consideration to generalize effectively. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows a large number of random filters to be used at the cost of one scalar unknown per filter. The computational cost in the back-propagation procedure does not increase with the larger size of the filters, even though additional computational cost is required for the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which a quantitative comparison is performed between well-known CNN architectures and our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with a smaller number of unknown weights. The proposed algorithm has high potential in a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition
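The core idea of fixed random filters scaled by a single trainable scalar each can be sketched roughly as follows in PyTorch; this is an illustrative reading of the abstract, not the authors' implementation, and the layer sizes and filter counts are assumptions.

```python
# Illustrative sketch (not the authors' code): a layer of fixed random
# convolution kernels at several sizes, each weighted by one trainable scalar.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomKernelLayer(nn.Module):
    def __init__(self, in_channels, filters_per_size=8, sizes=(3, 5, 7)):
        super().__init__()
        self.sizes = sizes
        self.scales = nn.ParameterList()
        for k in sizes:
            # Fixed random filters (registered as buffers, so they are not trained).
            self.register_buffer(f"bank_{k}", torch.randn(filters_per_size, in_channels, k, k))
            # One trainable scalar per random filter.
            self.scales.append(nn.Parameter(torch.ones(filters_per_size)))

    def forward(self, x):
        outputs = []
        for k, scale in zip(self.sizes, self.scales):
            bank = getattr(self, f"bank_{k}")
            response = F.conv2d(x, bank, padding=k // 2)       # same spatial size
            outputs.append(response * scale.view(1, -1, 1, 1))
        return torch.cat(outputs, dim=1)

layer = RandomKernelLayer(in_channels=3)
features = layer(torch.randn(1, 3, 64, 64))  # -> shape (1, 24, 64, 64)
```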
Procedia PDF Downloads 291
19841 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan
Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad
Abstract:
Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50 – 100 nm. This is a critical advantage, as growing larger crystals, as required by X-ray crystallography methods, is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron beam sensitive, which means there is a limit on the maximum electron dose one can use to collect the data required for a high-resolution structure determination. Therefore, collecting data using a conventional scintillator-based fiber-coupled camera brings additional challenges. This is because of the inherent noise introduced during the electron-to-photon conversion in the scintillator and the transfer of light via the fibers to the sensor, which results in a poor signal-to-noise ratio and requires relatively high and commonly specimen-damaging electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem. The K3 is an electron counting detector optimized for low-dose applications (like structural biology cryo-EM), and Stela is also a counting electron detector but optimized for diffraction applications with high speed and high dynamic range. Lastly, data collection workflows, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, can be one of the other challenges of the 3DED technique. Traditionally, this has all been done manually or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher quality 3DED data enable structure determination with higher confidence, while automated workflows allow these to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct detection electron counting cameras enhance 3DED results (from 3 to better than 1 Angstrom) for protein and small molecule structure determination. We will also show how the Latitude D software facilitates collecting such data in an integrated and fully automated user interface.
Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, Digitalmicrograph, proteins, small molecules
Procedia PDF Downloads 107
19840 Ex-Post Export Data for Differentiated Products Revealing the Existence of Product Cycles
Authors: Ranajoy Bhattcharyya
Abstract:
We estimate international product cycles as shifting product spaces by using 1976 to 2010 UN Comtrade data on all differentiated tradable products in all countries. We use a product space approach to identify the representative product baskets of high-, middle- and low-income countries and then use these baskets to identify the patterns of change in the comparative advantage of countries over time. We find evidence of a product cycle in two senses. First, high-, middle- and low-income countries differ in comparative advantage, and high-income products migrate to the middle-income basket. Second, a similar pattern is observed for middle- and low-income countries. Our estimation of the lag shows that middle-income countries tend to take up the products of high-income countries quickly, but low-income countries take a longer time to absorb these products. Thus, the gap between low- and middle-income countries is considerably larger than that between middle- and high-income nations.
Keywords: product cycle, comparative advantage, representative product basket, ex-post data
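A standard way to operationalize "comparative advantage" from export data of this kind is the Balassa revealed comparative advantage (RCA) index; the pandas sketch below is a generic illustration with made-up column names and values, not the authors' product-space code.

```python
# Generic sketch of the Balassa revealed comparative advantage (RCA) index:
# RCA(country, product) = (x_cp / X_c) / (x_wp / X_w), where x are export values.
# The data frame layout and numbers are assumptions for illustration.
import pandas as pd

def rca_table(exports: pd.DataFrame) -> pd.DataFrame:
    """exports: rows indexed by country, columns = products, values = export value."""
    country_totals = exports.sum(axis=1)           # X_c
    world_by_product = exports.sum(axis=0)         # x_wp
    world_total = exports.values.sum()             # X_w
    share_in_country = exports.div(country_totals, axis=0)
    share_in_world = world_by_product / world_total
    return share_in_country.div(share_in_world, axis=1)   # RCA > 1 => revealed advantage

exports = pd.DataFrame(
    {"electronics": [120.0, 15.0], "textiles": [30.0, 90.0]},
    index=["high_income_A", "low_income_B"],
)
print(rca_table(exports).round(2))
```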
Procedia PDF Downloads 421
19839 Evaluating Structural Crack Propagation Induced by Soundless Chemical Demolition Agent Using an Energy Release Rate Approach
Authors: Shyaka Eugene
Abstract:
The efficient and safe demolition of structures is a critical challenge in civil engineering and construction. This study focuses on the development of optimal demolition strategies by investigating the crack propagation behavior induced in beams by soundless cracking agents. Such agents are commonly used in controlled demolition and have gained prominence due to their non-explosive and environmentally friendly nature. This research employs a comprehensive experimental and computational approach to analyze crack initiation, propagation, and eventual failure in beams subjected to soundless cracking agents. Experimental testing involves the application of various cracking agents under controlled conditions to understand their effects on the structural integrity of beams. High-resolution imaging and strain measurements are used to capture the crack propagation process. In parallel, numerical simulations are conducted using advanced finite element analysis (FEA) techniques to model crack propagation in beams, considering various parameters such as cracking agent composition, loading conditions, and beam properties. The FEA models are validated against experimental results, ensuring their accuracy in predicting crack propagation patterns. The findings of this study provide valuable insights into optimizing demolition strategies, allowing engineers and demolition experts to make informed decisions regarding the selection of cracking agents, their application techniques, and structural reinforcement methods. Ultimately, this research contributes to enhancing the safety, efficiency, and sustainability of demolition practices in the construction industry, reducing environmental impact and ensuring the protection of adjacent structures and the surrounding environment.
Keywords: expansion pressure, energy release rate, soundless chemical demolition agent, crack propagation
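For context, the energy release rate named in the title and keywords is conventionally defined in linear elastic fracture mechanics as the potential energy released per unit of new crack area, with propagation predicted when it reaches the material's critical value; the expressions below are the standard textbook forms, not results from this study.

```latex
% Standard LEFM relations (textbook forms, not results from this abstract)
G = -\frac{\partial \Pi}{\partial A}, \qquad
\text{crack advance when } G \ge G_c, \qquad
G = \frac{K_I^{2}}{E'} \ \text{(mode I)}, \quad
E' = E \ \text{(plane stress)}, \quad
E' = \frac{E}{1-\nu^{2}} \ \text{(plane strain)}
```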
Procedia PDF Downloads 63
19838 A Comparison between Artificial Neural Network Prediction Models for Coronal Hole Related High Speed Streams
Authors: Rehab Abdulmajed, Amr Hamada, Ahmed Elsaid, Hisashi Hayakawa, Ayman Mahrous
Abstract:
Solar emissions have a high impact on the Earth's magnetic field, and the prediction of solar events is of high interest. Various techniques have been used in the prediction of the solar wind, including mathematical models, MHD models, and neural network (NN) models. This study investigates coronal hole (CH) derived high-speed streams (HSSs) and their correlation to the CH area, and creates a neural network model to predict the HSSs. Two different algorithms were used to compare different models and find the model that best simulates the HSSs. A dataset of CH synoptic maps for Carrington rotations 1601 to 2185, along with the OMNI data set of solar wind speed averaged over the Carrington rotations, is used; it covers solar cycles 21, 22, 23, and most of 24.
Keywords: artificial neural network, coronal hole area, feed-forward neural network models, solar high speed streams
Procedia PDF Downloads 89
19837 Deformation of Metallic Foams with Closed Cell at High Temperatures
Authors: Emrah Ersoy, Yusuf Ozcatalbas
Abstract:
The aim of this study is to investigate the formability of Al-based closed-cell metallic foams at high temperature. Foam specimens with a rectangular section were produced from AlMg1Si0.6TiH20.8 alloy preform material. Bending and free bending tests based on the effect of gravity were applied to the foam specimens at high temperatures. During the tests, the time-angular deformation relationships at various temperatures were determined. The deformation types formed in the cell walls were investigated by means of scanning electron microscopy (SEM) and optical microscopy. Bending deformation of about 90° was achieved without any defects at high temperatures. The importance of a critical temperature and deformation rate in maintaining the deformation was emphasized. Significant slip lines were observed on the surfaces of cell walls in the tensile zones of the bending specimens. At high strain rates, microcrack formation at the boundaries of elongated grains was determined.
Keywords: Al alloy, Closed cell, Hot deformation, Metallic foam
Procedia PDF Downloads 369
19836 INCIPIT-CRIS: A Research Information System Combining Linked Data Ontologies and Persistent Identifiers
Authors: David Nogueiras Blanco, Amir Alwash, Arnaud Gaudinat, René Schneider
Abstract:
At a time when access to and the sharing of information are crucial in the world of research, the use of technologies such as persistent identifiers (PIDs), Current Research Information Systems (CRIS), and ontologies may create platforms for information sharing, provided they respond to the need for disambiguation of their data by ensuring interoperability within and between systems. INCIPIT-CRIS is a continuation of the former INCIPIT project, whose goal was to set up an infrastructure for the low-cost attribution of PIDs with high granularity based on Archival Resource Keys (ARKs). INCIPIT-CRIS can be seen as its logical consequence and proposes a research information management system developed from scratch. The system has been built on and around the Schema.org ontology with a further articulation of the use of ARKs. It is thus built upon the infrastructure previously implemented (i.e., INCIPIT) in order to enhance the persistence of URIs. As a consequence, INCIPIT-CRIS aims to be the hinge between previously separated aspects such as CRIS, ontologies and PIDs in order to produce a powerful system allowing the resolution of disambiguation problems using a combination of an ontology such as Schema.org and unique persistent identifiers such as ARKs, allowing the sharing of information through a dedicated platform, but also the interoperability of the system by representing the entirety of the data as RDF triples. This paper presents the implemented solution as well as its simulation in real life. We describe the underlying ideas and inspirations while going through the logic and the different functionalities implemented, and their links with ARKs and Schema.org. Finally, we discuss the tests performed with our project partner, the Swiss Institute of Bioinformatics (SIB), using large, real-world data sets.
Keywords: current research information systems, linked data, ontologies, persistent identifier, schema.org, semantic web
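To make the combination of Schema.org terms and ARK identifiers concrete, here is a minimal, hypothetical RDF sketch in Python with rdflib; the ARK, person name, and affiliation are invented placeholders and do not come from the INCIPIT-CRIS system.

```python
# Hypothetical sketch: describing a researcher with Schema.org terms, using an
# ARK-based URI as the persistent identifier. All values below are placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SDO = Namespace("https://schema.org/")
g = Graph()
g.bind("sdo", SDO)

researcher = URIRef("https://n2t.net/ark:/99999/fk4example")  # placeholder ARK
g.add((researcher, RDF.type, SDO.Person))
g.add((researcher, SDO.name, Literal("Jane Doe")))
g.add((researcher, SDO.affiliation, Literal("Example Institute")))

print(g.serialize(format="turtle"))
```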
Procedia PDF Downloads 136
19835 An Assessment of Floodplain Vegetation Response to Groundwater Changes Using the Soil & Water Assessment Tool Hydrological Model, Geographic Information System, and Machine Learning in the Southeast Australian River Basin
Authors: Newton Muhury, Armando A. Apan, Tek N. Marasani, Gebiaw T. Ayele
Abstract:
The changing climate has degraded freshwater availability in Australia, which influences vegetation growth to a great extent. This study assessed the vegetation responses to groundwater using the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) Normalised Difference Vegetation Index (NDVI) and soil water content (SWC). A hydrological model, SWAT, was set up in a southeast Australian river catchment for groundwater analysis. The model was calibrated and validated against monthly streamflow from 2001 to 2006 and from 2007 to 2010, respectively. The SWAT-simulated soil water content for 43 sub-basins and monthly MODIS NDVI data for three different types of vegetation (forest, shrub, and grass) were applied in the machine learning tool Waikato Environment for Knowledge Analysis (WEKA), using two supervised machine learning algorithms, i.e., support vector machine (SVM) and random forest (RF). The assessment shows that the responses of the different vegetation types and the soil water content vary between the dry and wet seasons. The WEKA model generated high positive relationships (r = 0.76, 0.73, and 0.81) between the NDVI values of all vegetation in the sub-basins and the soil water content (SWC), the groundwater flow (GW), and the combination of these two variables, respectively, during the dry season. However, these responses were reduced by 36.8% (r = 0.48) and 13.6% (r = 0.63) against GW and SWC, respectively, in the wet season. Although the rainfall pattern is highly variable in the study area, the summer rainfall is very effective for the growth of the grass vegetation type. This study has enriched our knowledge of vegetation responses to groundwater in each season, which will facilitate better floodplain vegetation management.
Keywords: ArcSWAT, machine learning, floodplain vegetation, MODIS NDVI, groundwater
Procedia PDF Downloads 101
19834 Effect of Phosphorus and Potassium Nutrition on Growth, Yield and Minerals Accumulation of Two Soybean Cultivars Differing in Phytate Contents
Authors: Taliman Nisar Ahmad, Hirofume Saneoka
Abstract:
A pot experiment was conducted to investigate the effect of phosphorus (P) and potassium (K) nutrition on the grain yield, phytic acid and grain quality of a high-phytate cultivar (Akimaro) and a low-phytate line. Phosphorus and potassium were applied as P₁ (20 kg ha⁻¹) and P₂ (100 kg ha⁻¹), and likewise K₁ (20 kg ha⁻¹) and K₂ (100 kg ha⁻¹), respectively. The low-phytate soybean had the highest grain yield, and a 75% increase was observed compared to the high-phytate cultivar under the same treatments. Highly significant differences in seed phytate P were observed between the two cultivars: the phytate P in the high-phytate cultivar was 39% higher than in the low-phytate line, whereas no significant differences were observed in response to the P and K treatments. The percentage of phytate P out of total P in seeds was 28 to 35% in the low-phytate line and 72 to 81% in the high-phytate cultivar under the different treatments. The lipid content of the low-phytate line was found to be lower than that of the high-phytate cultivar. Crude protein in grains was also found to be significantly higher under the combined PK treatment. No significant difference was observed in seed calcium (Ca), magnesium (Mg), and zinc (Zn) between treatments, but the high-phytate cultivar showed an 87% increase in seed Ca and a 76% increase in Mg compared to the low-phytate line; however, the low-phytate line showed an 82% increase in Zn content over the high-phytate cultivar. The results illustrate that the low-phytate soybean achieved higher grain yield and grain Pi in response to increased P and K nutrition. To achieve higher yield and quality seeds from the low-phytate soybean, it is recommended that the proper phosphorus and potassium nutrition suggested in this study be applied.
Keywords: phytic acid, low-phytate soybean, high-phytate soybean, P and K nutrition, protein content, soybean
Procedia PDF Downloads 135
19833 Strategic Communication in Turkish Independence War
Authors: Özkan Özgenç, Serdar Hacisalihoğlu, Murat Yanik
Abstract:
History has shown that quantitative and qualitative supremacy in terms of military and economic power has been inadequate to reach the desired results. In addition, public support has been a crucial requirement for the success of any struggle. As a leader seeking ways to achieve the independence of the country, Ataturk understood that the only solution was possible with the help of public will and determination. Ataturk needed an impeccable communication strategy to combine efforts by establishing a united notion and action; to convince the world and the Turkish nation of the legitimacy and sacredness of the independence struggle; and to show the resolution and determination of the Turkish nation against the invaders. To emancipate the Turkish nation, Ataturk shaped the nation's emotions, ideas, and behaviors by using the most appropriate tools at the best time and place from the start of the Independence War on May 19, 1919.
Keywords: Atatürk, Turkish independence struggle, strategic communication, independence war
Procedia PDF Downloads 295
19832 The Effect of Acute Aerobic Exercise after Consumption of Four Different Diets on Serum Levels of Irisin, Insulin and Glucose in Overweight Men
Authors: Majid Mardaniyan Ghahfarokhi, Abdolhamid Habibi, Majid Mohammad Shahi
Abstract:
The combination of exercise and diet has been raised as the most important strategy for reducing weight and controlling obesity-related factors, including irisin, insulin, and glucose. The aim of this study was to investigate the effect of aerobic exercise combined with four different diets on serum levels of irisin, insulin, and glucose in overweight men. Methods: In this quasi-experimental study, 8 overweight men (BMI 29.23±0.47) with an average age of 23±1.6 years voluntarily participated in 4 sessions at one-week intervals. The study was conducted in an exercise physiology laboratory. In each session, subjects performed a 30-minute treadmill test at 60-70% of maximum heart rate after consuming a high-carbohydrate, high-fat, high-protein, or normal diet. For biochemical measurements, three blood samples were taken: in the fasting state, two hours after the meal, and after exercise. Results: Statistical analysis showed that serum irisin levels were reduced after consumption of all four diets, and this reduction was significant for the high-fat diet (p ≤ 0.038). Serum concentrations of insulin and glucose increased after consuming all four diets; however, the increase in serum insulin and glucose was significant only after consuming the high-carbohydrate diet (p ≤ 0.001 and p ≤ 0.042, respectively). In addition, during exercise after consuming each of the four diets (normal, high-carbohydrate, high-protein and high-fat), irisin increased significantly (p ≤ 0.021, p ≤ 0.049, p ≤ 0.001, and p ≤ 0.003, respectively), insulin decreased significantly (p ≤ 0.002, p ≤ 0.001, p ≤ 0.001, and p ≤ 0.002, respectively), and glucose was significantly reduced (p ≤ 0.001, p ≤ 0.001, p ≤ 0.001, and p ≤ 0.002, respectively). The highest increase in irisin levels was observed after aerobic activity following the consumption of a high-protein diet, and the greatest decrease in insulin and glucose levels was observed after aerobic exercise following the consumption of a high-carbohydrate diet. Conclusion: It seems that diet alone, and exercise following different diets, can have a significant effect on irisin, insulin, and glucose serum levels in overweight young men.
Keywords: acute aerobic exercise, diet, irisin, overweight
Procedia PDF Downloads 260
19831 Novel Approach to Design of a Class-EJ Power Amplifier Using High Power Technology
Authors: F. Rahmani, F. Razaghian, A. R. Kashaninia
Abstract:
This article proposes a new method for application in communication circuit systems that increases efficiency, PAE, output power, and gain in the circuit. The proposed method is based on a combination of the switching class-E and class-J modes and has been termed class-EJ. The method was investigated using both theory and simulation, confirming ~72% PAE and an output power of > 39 dBm. The proposed power amplifier design achieves a gain of over 15 dB in the 2.9 to 3.5 GHz frequency band. The circuit was designed using MOSFET and high power transistors. The load- and source-pull method was used to obtain the best input and output networks using lumped elements. The proposed technique was investigated for the fundamental and second harmonics, which have desirable amplitudes for the output signal.
Keywords: power amplifier (PA), high power, class-J and class-E, high efficiency
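For reference, the power-added efficiency (PAE) figure quoted above is conventionally defined as follows; this is the standard definition, not a derivation from the paper, and at ~72% PAE roughly 72% of the DC supply power appears as added RF output power.

```latex
% Standard definition of power-added efficiency (not derived from this paper)
\mathrm{PAE} \;=\; \frac{P_{\mathrm{out,RF}} - P_{\mathrm{in,RF}}}{P_{\mathrm{DC}}}
```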
Procedia PDF Downloads 493
19830 Introduction of the Fluid-Structure Coupling into the Force Analysis Technique
Authors: Océane Grosset, Charles Pézerat, Jean-Hugh Thomas, Frédéric Ablitzer
Abstract:
This paper presents a method to take the fluid-structure coupling into account in an inverse method, the Force Analysis Technique (FAT). The FAT method, also called the RIFF method (Filtered Windowed Inverse Resolution), allows the force distribution to be identified from the local vibration field. In order to identify only the external force applied on a structure, it is necessary to quantify the fluid-structure coupling, especially in naval applications, where the fluid is heavy. The method can be decomposed into two parts: the first consists of identifying the fluid-structure coupling, and the second of introducing it into the FAT method to reconstruct the external force. Results of simulations on a plate coupled with a cavity filled with water are presented.
Keywords: aeroacoustics, fluid-structure coupling, inverse methods, naval, turbulent flow
Procedia PDF Downloads 520
19829 Performance Analysis of Transformerless DC-DC Boost Converter
Authors: Nidhi Vijay, A. K. Sharma
Abstract:
Many industrial applications require power from a DC source. DC-DC boost converters are now being used all over the world for rapid transit systems. Although they provide high efficiency, smooth control, fast response and regeneration, conventional DC-DC boost converters are unable to provide a high step-up voltage gain due to the effect of the power switches, rectifier diodes, and the equivalent series resistance of the inductor and capacitor. This paper proposes new transformerless DC-DC converters to achieve a high step-up voltage gain, compared to the conventional converter, without an extremely high duty ratio. Only one power stage is used in this converter. A steady-state analysis of the voltage gain is discussed in brief. Finally, a comparative analysis is given in order to verify the results.
Keywords: MATLAB, DC-DC boost converter, voltage gain, voltage stress
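As background for the "extremely high duty ratio" remark, the ideal continuous-conduction-mode gain of the conventional boost converter is the textbook expression below; it is quoted here for context and is not taken from the paper's own steady-state analysis. In practice, the parasitic series resistances mentioned in the abstract cause the real gain to collapse as the duty ratio approaches unity, which is precisely why transformerless high step-up topologies are of interest.

```latex
% Ideal CCM voltage gain of the conventional boost converter (textbook result, not from this paper)
M \;=\; \frac{V_{o}}{V_{in}} \;=\; \frac{1}{1-D}
\qquad \Rightarrow \qquad M = 10 \ \text{ already requires } \ D = 0.9
```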
Procedia PDF Downloads 430
19828 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network
Authors: P. Karthick, K. Mahesh
Abstract:
Video has become an increasingly significant component of our everyday digital life. With the move towards richer content and higher resolutions, its sheer volume poses serious obstacles to receiving, distributing, compressing, and displaying video content of high quality. In this paper, we propose a deep video compression model that jointly optimizes all video compression components. The video compression method involves splitting the video into frames, comparing the images using convolutional neural networks (CNN) to remove duplicates, replacing duplicate images with a single repeated image by recognizing and detecting minute changes using a generative adversarial network (GAN), and recording them with long short-term memory (LSTM). Instead of the complete image, only the small changes generated using the GAN are substituted, which helps in frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over the frame, clustered with K-means, and singular value decomposition (SVD) is applied to each and every frame in the video for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec and converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality demonstrate a significant resampling rate. On average, the result produced had approximately a 10% deviation in quality and a more than 50% reduction in size when compared with the original video.
Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system
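The per-channel SVD step described above can be illustrated with a short NumPy sketch; this is a generic low-rank approximation of one RGB frame, with an assumed rank and random data standing in for real frames, and is not the authors' pipeline.

```python
# Generic sketch: rank-k SVD approximation of each color channel of one frame.
# The frame data and the retained rank are assumptions for illustration only.
import numpy as np

def lowrank_frame(frame: np.ndarray, k: int = 20) -> np.ndarray:
    """frame: (H, W, 3) array; returns a rank-k approximation per channel."""
    approx = np.empty_like(frame, dtype=float)
    for c in range(3):                                    # R, G, B channels
        U, s, Vt = np.linalg.svd(frame[:, :, c], full_matrices=False)
        approx[:, :, c] = (U[:, :k] * s[:k]) @ Vt[:k, :]  # keep k latent factors
    return approx

frame = np.random.rand(240, 320, 3)                       # stand-in for a video frame
compressed = lowrank_frame(frame, k=20)
print("relative reconstruction error:",
      np.linalg.norm(frame - compressed) / np.linalg.norm(frame))
```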
Procedia PDF Downloads 188
19827 Speeding-up Gray-Scale FIC by Moments
Authors: Eman A. Al-Hilo, Hawraa H. Al-Waelly
Abstract:
In this work, a fractal image compression (FIC) technique is introduced based on using moment features for block indexing of the zero-mean range-domain blocks. The moment features are used to speed up the IFS-matching stage. A moment-ratio descriptor is used to filter the domain blocks and keep only the blocks that are suitable for IFS matching with the tested range block. The results of tests conducted on the Lena and Cat images (256 pixels, resolution 24 bits/pixel) showed a minimum encoding time (0.89 sec for the Lena image and 0.78 sec for the Cat image) with an appropriate PSNR (30.01 dB for the Lena image and 29.8 dB for the Cat image). The reduction in encoding time (ET) is about 12% for the Lena image and 67% for the Cat image.
Keywords: fractal gray level image, fractal compression technique, iterated function system, moments feature, zero-mean range-domain block
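The PSNR figures quoted above follow the usual peak signal-to-noise ratio definition; a small NumPy sketch of that computation (assuming 8-bit grayscale images) is shown below and is independent of the authors' FIC implementation.

```python
# Standard PSNR computation for 8-bit images (illustrative, not the authors' code).
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                       # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in "original"
b = np.clip(a + np.random.normal(0, 5, a.shape), 0, 255)    # stand-in "decoded" image
print(f"PSNR = {psnr(a, b):.2f} dB")
```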
Procedia PDF Downloads 497
19826 Exfoliation of Functionalized High Structural Integrity Graphene Nanoplatelets at Extremely Low Temperature
Authors: Mohannad N. H. Al-Malichi
Abstract:
Because of its exceptional properties, graphene has become the most promising nanomaterial for the development of a new generation of advanced materials, from battery electrodes to structural composites. However, current methods for the mass production of high-quality graphene are limited by harsh oxidation, high temperatures, and tedious processing steps. To extend the scope of the bulk production of graphene, a facile, reproducible and cost-effective approach has been developed herein. This involved heating a specific mixture of chemical materials at an extremely low temperature (70 °C) for a short period (7 minutes) to exfoliate functionalized graphene platelets with high structural integrity. The obtained graphene platelets have an average thickness of 3.86±0.71 nm and a lateral size of less than ~2 µm, with a low defect intensity (ID/IG ~0.06). A thin film (~2 µm thick) exhibited a low surface resistance of ~0.63 Ω/sq, confirming its high electrical conductivity. Additionally, these nanoplatelets are decorated with polar functional groups (epoxy and carboxyl groups) and thus have the potential to toughen polymer nanocomposites and make them multifunctional. Moreover, such a simple method can be further exploited for the novel exfoliation of other layered two-dimensional materials such as MXenes.
Keywords: functionalized graphene nanoplatelets, high structural integrity graphene, low temperature exfoliation of graphene, functional graphene platelets
Procedia PDF Downloads 120
19825 High Temperature Properties of Diffusion Brazed Joints of IN 939 Ni-Base Superalloy
Authors: Hyunki Kang, Hi Won Jeong
Abstract:
Gas turbines operate for long periods of time under harsh, cyclic conditions of high temperature and pressure, where the turbine inlet temperature (TIT) can range from 1273 to 1873 K. Therefore, Ni-base superalloys such as IN738, IN939, Rene 45, Rene 71, Rene 80, Mar M 247, CM 247, and CMSX-4, with excellent mechanical properties and resistance to creep, corrosion and oxidation at high temperatures, are used. Among the alloying additions in these alloys, aluminum (Al) and titanium (Ti) form the gamma prime phase and enhance the high-temperature properties. However, when crack-damaged high-temperature turbine components such as blades and vanes are repaired by fusion welding, cracks occur. For example, when arc welding is applied to certain superalloys that contain Al and Ti at more than 3 wt.% and 3.5 wt.%, respectively, such as IN738, IN939, Rene 80, Mar M 247, and CM 247, aging cracks occur. Therefore, repair technologies using diffusion brazing, which involves less heat input into the base material, are being developed. Analysis of the microstructural evolution of brazed joints with a base metal of IN 939 Ni-base superalloy, brazed using different filler metals, was carried out using X-ray diffraction, OEM, SEM-EDS, and EPMA. Stress rupture and high-temperature tensile strength properties were also measured to analyze the effects of different brazing heat cycles. The boron content in the diffusion-affected zone (DAZ) decreased towards the base metal, and the formation of borides at grain boundaries was detected through EPMA.
Keywords: gas turbine, diffusion brazing, superalloy, gas turbine repair
Procedia PDF Downloads 42
19824 Spectral Mixture Model Applied to Cannabis Parcel Determination
Authors: Levent Basayigit, Sinan Demir, Yusuf Ucar, Burhan Kara
Abstract:
Many research projects require accurate delineation of the different land cover types of an agricultural area. This is especially critical for the identification of specific plants such as cannabis. However, the complexity of vegetation stand structure, the abundance of vegetation species, and the smooth transitions between different secondary succession stages make vegetation classification difficult when using traditional approaches such as the maximum likelihood classifier. Most of the time, classification distinguishes only between trees/annuals or grain, and it has been difficult to accurately determine cannabis mixed with other plants. In this paper, a mixed distribution model approach is applied to classify pure and mixed cannabis parcels using WorldView-2 imagery in the Lakes region of Turkey. Five different land use types (i.e., sunflower, maize, bare soil, and cannabis) were identified in the image. A constrained Gaussian mixture discriminant analysis (GMDA) was used to unmix the image. In the study, 255 reflectance ratios derived from the spectral signatures of seven bands (Blue-Green-Yellow-Red-Rededge-NIR1-NIR2) were randomly split into 80% training and 20% test data. The Gaussian mixture distribution model approach proved to be an effective and convenient way of using very high spatial resolution imagery to distinguish cannabis vegetation. Based on the overall accuracies of the classification, the Gaussian mixture distribution model was found to be very successful in image classification tasks. This approach is sensitive enough to capture illegal cannabis planting areas in the large plain. The approach can also be used for monitoring and detecting illegal cannabis planting areas from their spectral reflectance.
Keywords: Gaussian mixture discriminant analysis, spectral mixture model, Worldview-2, land parcels
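One simple way to read "Gaussian mixture discriminant analysis" operationally is to fit a Gaussian mixture per class on the band-ratio features and classify by maximum posterior; the scikit-learn sketch below illustrates that generic idea with synthetic data and is not the constrained GMDA used in the study.

```python
# Generic per-class Gaussian mixture classifier (illustration only; not the
# constrained GMDA of the study). Features stand in for band-ratio vectors.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
classes = ["sunflower", "maize", "bare_soil", "cannabis"]
X_train = {c: rng.normal(loc=i, scale=0.5, size=(200, 21)) for i, c in enumerate(classes)}

models, priors = {}, {}
n_total = sum(len(v) for v in X_train.values())
for c, X in X_train.items():
    models[c] = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
    priors[c] = len(X) / n_total

def classify(x):
    """Assign the class with the highest log-likelihood plus log-prior."""
    scores = {c: models[c].score_samples(x.reshape(1, -1))[0] + np.log(priors[c])
              for c in classes}
    return max(scores, key=scores.get)

print(classify(rng.normal(loc=3, scale=0.5, size=21)))   # expected: "cannabis"
```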
Procedia PDF Downloads 197
19823 Film Dosimetry – An Asset for Collaboration Between Cancer Radiotherapy Centers at Established Institutions and Those Located in Low- and Middle-Income Countries
Authors: A. Fomujong, P. Mobit, A. Ndlovu, R. Teboh
Abstract:
Purpose: Film's unique qualities, such as tissue equivalence, high spatial resolution, near energy independence, and its comparatively low cost as a dosimeter, ought to make it preferred and widely used in radiotherapy centers in low- and middle-income countries (LMICs). This, however, is not always the case, as other factors that are often taken for granted in advanced radiotherapy centers remain a challenge in LMICs. We explored the unique qualities of film dosimetry that can make it possible for one institution to benefit from another's protocols via collaboration. Methods: For simplicity, two institutions were considered in this work. We used a single batch of films (EBT-XD) and established a calibration protocol, including scan protocols and calibration curves, using the radiotherapy delivery system at Institution A. We then performed patient-specific QA for patients treated on system A (PSQA-A-A). Films from the same batch were then sent to a remote center for PSQA on radiotherapy delivery system B. Irradiations were done at Institution B, and the films were then returned to Institution A for processing and analysis (PSQA-B-A). The following points were taken into consideration throughout the process: (a) a reference film was irradiated to a known dose on the same system irradiating the PSQA film; (b) for calibration, we utilized the one-scan protocol and maintained the same scan orientation for the calibration, PSQA and reference films. Results: Gamma index analysis using a dose threshold of 10% and 3%/2 mm criteria showed a gamma passing rate of 99.8% and 100% for PSQA-A-A and PSQA-B-A, respectively. Conclusion: This work demonstrates that one could use the established film dosimetry protocols of one institution, e.g., an advanced radiotherapy center, and apply similar accuracies to irradiations performed at another institution, e.g., a center located in an LMIC, thus encouraging collaboration between the two for worldwide patient benefit.
Keywords: collaboration, film dosimetry, LMIC, radiotherapy, calibration
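The gamma passing rate reported above comes from the standard gamma-index comparison of measured and planned dose distributions; the brute-force one-dimensional sketch below shows the idea for a 3%/2 mm global criterion with a 10% dose threshold, using synthetic profiles, and is much simpler than the 2D film analysis the authors would perform.

```python
# Simplified 1-D global gamma analysis (3%/2 mm, 10% dose threshold).
# Synthetic profiles; real film QA is 2-D and uses dedicated software.
import numpy as np

def gamma_pass_rate(x_ref, d_ref, x_eval, d_eval,
                    dose_pct=3.0, dta_mm=2.0, threshold=0.10):
    dose_crit = dose_pct / 100.0 * d_ref.max()     # global dose normalization
    gammas = []
    for xe, de in zip(x_eval, d_eval):
        if de < threshold * d_ref.max():           # skip low-dose points
            continue
        g2 = ((xe - x_ref) / dta_mm) ** 2 + ((de - d_ref) / dose_crit) ** 2
        gammas.append(np.sqrt(g2.min()))           # best match over all reference points
    gammas = np.asarray(gammas)
    return 100.0 * np.mean(gammas <= 1.0)

x = np.linspace(-50, 50, 501)                       # positions in mm
planned = 2.0 * np.exp(-(x / 30.0) ** 2)            # planned dose profile (Gy)
measured = planned * (1 + 0.01 * np.random.randn(x.size))  # film-measured profile
print(f"gamma passing rate: {gamma_pass_rate(x, planned, x, measured):.1f}%")
```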
Procedia PDF Downloads 75
19822 An Application-Driven Procedure for Optimal Signal Digitization of Automotive-Grade Ultrasonic Sensors
Authors: Mohamed Shawki Elamir, Heinrich Gotzig, Raoul Zoellner, Patrick Maeder
Abstract:
In this work, a methodology is presented for identifying the optimal digitization parameters for the analog signal of ultrasonic sensors. These digitization parameters are the resolution of the analog-to-digital conversion and the sampling rate. This is accomplished through the derivation of characteristic curves based on the Fano inequality and the calculation of the mutual information content over a given dataset. The mutual information is calculated between the examples in the dataset and the corresponding variation in the feature that needs to be estimated. The optimal parameters are identified in a manner that ensures optimal estimation performance while avoiding the inefficiency of using unnecessarily powerful analog-to-digital converters.
Keywords: analog to digital conversion, digitization, sampling rate, ultrasonic
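To illustrate the kind of computation involved, the sketch below estimates the mutual information between a quantized sensor signal and the quantity to be estimated for several candidate ADC bit depths; the synthetic signal model and the use of scikit-learn's estimator are assumptions for illustration, not the authors' procedure.

```python
# Illustrative sketch: mutual information between a quantized signal and the
# target quantity, swept over candidate ADC resolutions. Synthetic data only.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
target = rng.uniform(0.2, 4.0, 5000)                             # e.g. object distance (m), assumed
signal = np.exp(-target) + 0.01 * rng.normal(size=target.size)   # toy echo amplitude model

def quantize(x, bits):
    """Uniform quantization of x to 2**bits levels over its full range."""
    levels = 2 ** bits
    lo, hi = x.min(), x.max()
    q = np.round((x - lo) / (hi - lo) * (levels - 1))
    return lo + q * (hi - lo) / (levels - 1)

for bits in (4, 6, 8, 10, 12):
    mi = mutual_info_regression(quantize(signal, bits).reshape(-1, 1), target)[0]
    print(f"{bits:2d}-bit ADC: estimated I(signal; target) ≈ {mi:.3f} nats")
```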
Procedia PDF Downloads 207