Search results for: parallel processing
437 The Use of Optical-Radar Remotely-Sensed Data for Characterizing Geomorphic, Structural and Hydrologic Features and Modeling Groundwater Prospective Zones in Arid Zones
Authors: Mohamed Abdelkareem
Abstract:
Remote sensing data contribute to predicting prospective areas of water resources. Here, microwave and multispectral data were integrated with climatic, hydrologic, and geological data. In this article, Sentinel-2, Landsat-8 Operational Land Imager (OLI), Shuttle Radar Topography Mission (SRTM), Tropical Rainfall Measuring Mission (TRMM), and Advanced Land Observing Satellite (ALOS) Phased Array Type L-band Synthetic Aperture Radar (PALSAR) data were utilized to identify the geological, hydrologic and structural features of Wadi Asyuti, a defunct tributary of the Nile basin in the eastern Sahara. Image transformation of the Sentinel-2 and Landsat-8 data allowed the different varieties of rock units to be characterized. Integration of microwave remotely sensed data and GIS techniques provided information on the physical characteristics of catchments and rainfall zones that play a crucial role in mapping groundwater prospective zones. Fused Landsat-8 OLI and ALOS/PALSAR data enhanced structural elements that are difficult to reveal using optical data alone. Lineament extraction and interpretation indicated that the area is clearly shaped by a NE-SW graben cut by a NW-SE trend. Such structures allowed the accumulation of thick sediments in the downstream area. Processing of recent OLI data acquired on March 15, 2014, verified the flood potential maps, offered the opportunity to extract the extent of the flooding zone of the recent flash flood event (March 9, 2014), and revealed infiltration characteristics. Several layers, including geology, slope, topography, drainage density, lineament density, soil characteristics, rainfall, and morphometric characteristics, were combined after assigning a weight to each using a GIS-based knowledge-driven approach. The results revealed that the predicted groundwater potential zones (GPZs) can be arranged into six distinctive groups, depending on their probability of groundwater occurrence, namely very low, low, moderate, high, very high, and excellent. Field and well data validated the delineated zones.
Keywords: GIS, remote sensing, groundwater, Egypt
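A minimal illustration of the GIS-based knowledge-driven weighted overlay described above, assuming hypothetical layer weights and synthetic rasters; the actual weights and layer scores used in the study are not reproduced here.

```python
import numpy as np

# Minimal sketch of a knowledge-driven weighted overlay (weights are illustrative,
# not the values used in the study). Each thematic layer is a raster rescaled to
# a common 0-1 suitability score before weighting.
layers = {
    "geology":           np.random.rand(100, 100),
    "slope":             np.random.rand(100, 100),
    "drainage_density":  np.random.rand(100, 100),
    "lineament_density": np.random.rand(100, 100),
    "rainfall":          np.random.rand(100, 100),
}
weights = {  # hypothetical expert weights summing to 1
    "geology": 0.25, "slope": 0.15, "drainage_density": 0.20,
    "lineament_density": 0.20, "rainfall": 0.20,
}

# Weighted linear combination of the rescaled layers
gpz_index = sum(weights[name] * raster for name, raster in layers.items())

# Slice the continuous index into six ordinal classes (very low ... excellent)
labels = ["very low", "low", "moderate", "high", "very high", "excellent"]
classes = np.digitize(gpz_index, np.quantile(gpz_index, np.linspace(0, 1, 7)[1:-1]))
print({labels[i]: int((classes == i).sum()) for i in range(6)})
```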
Procedia PDF Downloads 98
436 Satellite Interferometric Investigations of Subsidence Events Associated with Groundwater Extraction in Sao Paulo, Brazil
Authors: B. Mendonça, D. Sandwell
Abstract:
The Metropolitan Region of Sao Paulo (MRSP) has suffered from serious water scarcity. Consequently, the most convenient solution has been to drill wells to extract groundwater from local aquifers. However, this requires constant vigilance to prevent over-extraction and future events that could pose a serious threat to the population, such as subsidence. Radar imaging techniques (InSAR) have allowed continuous investigation of such phenomena. The data analyzed in the present study consist of 23 SAR images dated from October 2007 to March 2011, acquired by the ALOS-1 spacecraft. Data processing was carried out with the GMTSAR software, using the InSAR technique to create pairs of interferograms capturing ground displacement over different time spans. First results show a correlation between the location of 102 wells registered in 2009 and signals of ground displacement equal to or lower than -90 millimeters (mm) in the region. The longest-time-span interferogram obtained covers October 2007 to March 2010. From that interferogram, it was possible to detect the average displacement velocity in millimeters per year (mm/y) and to identify the areas of the MRSP where strong signals have persisted. Four specific areas with subsidence signals of 28 mm/y to 40 mm/y were chosen to investigate the phenomenon: Guarulhos (Sao Paulo International Airport), the Greater Sao Paulo, Itaquera and Sao Caetano do Sul. The signals extended over areas between 0.6 km and 1.65 km in length. All areas are located above a sedimentary type of aquifer. Itaquera and Sao Caetano do Sul showed signals varying from 28 mm/y to 32 mm/y. On the other hand, the places most likely to be suffering from stronger subsidence are those in the Greater Sao Paulo and in Guarulhos, right beside the International Airport of Sao Paulo. The displacement rate observed in both regions ranges from 35 mm/y to 40 mm/y. Previous investigations of water use at the International Airport highlight the risks of the excessive water extraction that was being carried out through 9 deep wells. Therefore, subsidence events are likely to occur and to cause serious damage in the area. This study reveals a situation that has not been explored with proper importance in the city, given its social and economic consequences. Since the data were only available until 2011, the question that remains is whether the situation still persists. A scenario of risk at the International Airport of Sao Paulo that needs further investigation can, however, be reaffirmed.
Keywords: ground subsidence, Interferometric Synthetic Aperture Radar (InSAR), metropolitan region of Sao Paulo, water extraction
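A hedged sketch of how a mean displacement velocity (mm/y) can be derived from a pixel's interferogram-based displacement time series by a least-squares line fit; the dates and displacement values below are synthetic, not the ALOS-1 measurements.

```python
import numpy as np

# Synthetic example: cumulative line-of-sight displacement (mm) at one pixel,
# sampled at acquisition dates given as decimal years. Values are illustrative only.
t = np.array([2007.8, 2008.3, 2008.8, 2009.3, 2009.8, 2010.3, 2010.8, 2011.2])
d = np.array([0.0, -14.0, -30.0, -44.0, -61.0, -73.0, -90.0, -104.0])

# Least-squares line fit: the slope is the mean displacement velocity in mm/yr,
# analogous to the mm/y rates reported for the subsiding areas.
velocity, offset = np.polyfit(t, d, 1)
print(f"mean velocity: {velocity:.1f} mm/yr")
```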
Procedia PDF Downloads 354
435 The Influence of Cognitive Load in the Acquisition of Words through Sentence or Essay Writing
Authors: Breno Barrreto Silva, Agnieszka Otwinowska, Katarzyna Kutylowska
Abstract:
Research comparing lexical learning following the writing of sentences and longer texts with keywords is limited and contradictory. One possibility is that the recursivity of writing may enhance processing and increase lexical learning; another is that the higher cognitive load of complex-text writing (e.g., essays), at least when timed, may hinder the learning of words. In our study, we selected two sets of 10 academic keywords matched for part of speech, length (number of characters), frequency (SUBTLEXus), and concreteness, and we asked 90 L1-Polish advanced-level English majors to use the keywords when writing sentences, timed essays (60 minutes), or untimed essays. First, all participants wrote a timed control essay (60 minutes) without keywords. Then different groups produced Timed essays (60 minutes; n=33), Untimed essays (n=24), or Sentences (n=33) using the two sets of glossed keywords (counterbalanced). The comparability of the participants in the three groups was ensured by matching them for proficiency in English (LexTALE) and for several measures derived from the control essay: VocD (assessing productive lexical diversity), normed errors (assessing productive accuracy), words per minute (assessing productive written fluency), and holistic scores (assessing overall quality of production). We measured lexical learning (depth and breadth) via an adapted Vocabulary Knowledge Scale (VKS) and a free-association test. Cognitive load was measured in the three essays (Control, Timed, Untimed) using the normed number of errors and holistic scores (TOEFL criteria). The number of errors and essay scores were obtained from two raters (interrater reliability Pearson's r=.78-.91). Generalized linear mixed models showed no difference in the breadth and depth of keyword knowledge after writing Sentences, Timed essays, and Untimed essays. The task-based measurements showed that Control and Timed essays had similar holistic scores, but that Untimed essays were of better quality than Timed essays. Also, Untimed essays were the most accurate, and Timed essays the most error-prone. In conclusion, using keywords in Timed, but not Untimed, essays increased cognitive load, leading to more errors and lower quality. Still, writing sentences and essays yielded similar lexical learning, and differences in cognitive load between Timed and Untimed essays did not affect lexical acquisition.
Keywords: learning academic words, writing essays, cognitive load, English as an L2
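A small sketch of the inter-rater reliability check mentioned above, computing Pearson's r between two raters' holistic essay scores; the scores shown are hypothetical.

```python
from scipy.stats import pearsonr

# Illustrative sketch of the inter-rater reliability check: holistic essay scores
# from two raters (hypothetical values on a TOEFL-style 0-5 scale).
rater_a = [3.5, 4.0, 2.5, 5.0, 3.0, 4.5, 2.0, 3.5, 4.0, 3.0]
rater_b = [3.0, 4.0, 3.0, 4.5, 3.5, 4.5, 2.5, 3.0, 4.0, 2.5]

r, p = pearsonr(rater_a, rater_b)
print(f"inter-rater reliability: Pearson r = {r:.2f} (p = {p:.3f})")
```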
Procedia PDF Downloads 73
434 Comparison of Different Methods of Microorganism's Identification from a Copper Mining in Pará, Brazil
Authors: Louise H. Gracioso, Marcela P.G. Baltazar, Ingrid R. Avanzi, Bruno Karolski, Luciana J. Gimenes, Claudio O. Nascimento, Elen A. Perpetuo
Abstract:
Introduction: Higher copper concentrations exert a selection pressure on organisms such as plants, fungi and bacteria, which allows only the organisms resistant to the contaminated site to survive. This selective pressure keeps only the organisms most resistant to a specific condition and subsequently increases their bioremediation potential. Despite the importance of bacteria for biosphere maintenance, it is estimated that only a small fraction of living microbial species has been described and characterized. Owing to developments in molecular biology, tools based on the analysis of 16S ribosomal RNA or other specific genes are creating a new scenario for studies on the characterization and identification of microorganisms in the environment. New methods for identifying microorganisms have also emerged, such as the Biotyper (MALDI/TOF); this mass spectrometry method relies on the recognition of spectroscopic patterns of conserved and characteristic proteins of different microbial species. In view of this, this study aimed to isolate bacteria resistant to copper present in a copper processing area (Sossego Mine, Canaan, PA) and to identify them by two different methods: a recent one (mass spectrometry) and a conventional one. The ultimate aim is to use these isolates for future bioremediation of this mining site. Material and Methods: Samples were collected at fifteen different sites over five periods of time. Microorganisms were isolated from mining wastes by the culture enrichment technique; this procedure was repeated 4 times. The isolates were inoculated into MJS medium containing different concentrations of copper chloride (1 mM, 2.5 mM, 5 mM, 7.5 mM and 10 mM) and incubated on plates for 72 h at 28 ºC. These isolates were subjected to identification by mass spectrometry (Biotyper – MALDI/TOF) and by 16S gene sequencing. Results: A total of 105 strains were isolated in this area; bacterial identification by the mass spectrometry method (MALDI/TOF) achieved 74% agreement with the conventional identification method (16S), 31% were unsuccessful in MALDI-TOF, and 2% did not yield an identification sequence for the 16S gene. These results show that the Biotyper can be a very useful tool for the identification of bacteria isolated from environmental samples, since it offers better value for money (cheap and simple sample preparation, and MALDI plates are reusable). Furthermore, this technique is more cost-effective because it saves time and has a high throughput (the mass spectra are compared to the database, and it takes less than 2 minutes per sample).
Keywords: copper mining area, bioremediation, microorganisms, identification, MALDI/TOF, RNA 16S
Procedia PDF Downloads 377
433 Clustering Ethno-Informatics of Naming Village in Java Island Using Data Mining
Authors: Atje Setiawan Abdullah, Budi Nurani Ruchjana, I. Gede Nyoman Mindra Jaya, Eddy Hermawan
Abstract:
Ethnoscience views culture from a scientific perspective, which may help to understand how people develop various forms of knowledge and belief, initially focusing on ecology and history and the contributions made there. One of the areas studied in ethnoscience is ethno-informatics, the application of informatics to culture. In this study, the informatics approach used is data mining, a process of automatically extracting knowledge from large databases in order to obtain interesting patterns and, ultimately, knowledge. The cultural application is described by a database of village names on the island of Java, obtained from the Indonesian Geospatial Information Agency (BIG), 2014. The purpose of this study is, first, to classify the naming of villages on the island of Java based on the structure of the village name, including the prefix of the word, the syllables contained, and the complete word; second, to classify the meaning of village names based on specific categories, as well as their role in community behavioral characteristics; and third, to visualize the village names on a location map, in order to see the similarity of village naming in each province. In this research we have developed two theorems, i.e., an area theorem resulting from the collected intersections of village names in each province on the island of Java, and a wedge-composition theorem on the sets of provinces in Java used to view the peculiarities of a studied location. The methodology in this study is based on the Knowledge Discovery in Databases (KDD) method in data mining, a process that includes preprocessing, data mining and post-processing. The results showed that the Javanese community prioritizes merit in conducting life, always working hard to achieve a more prosperous life, and values water and environmental sustainability. Village names in adjacent provinces have a high degree of similarity and influence each other. Cultural similarities between the provinces of Central Java, East Java and West Java-Banten are high, whereas Jakarta-Yogyakarta shows low similarity. This research captured the cultural character of communities within the meaning of village names on the island of Java; this character is expected to serve as a guide to the daily behavior of people on the island of Java.
Keywords: ethnoscience, ethno-informatics, data mining, clustering, Java island culture
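An illustrative sketch of the structural (prefix-based) classification step within the KDD workflow described above; the prefix list and village names are examples only, not the BIG database.

```python
from collections import Counter, defaultdict

# Illustrative sketch of the structural step: group village names by a known
# set of Javanese toponymic prefixes (prefix list and names are examples only).
prefixes = ("karang", "sumber", "tanjung", "suka", "sido")
villages = ["Karanganyar", "Sumberagung", "Sukamaju", "Sidomulyo",
            "Tanjungsari", "Karangrejo", "Sukasari", "Sidorejo"]

groups = defaultdict(list)
for name in villages:
    key = next((p for p in prefixes if name.lower().startswith(p)), "other")
    groups[key].append(name)

# Frequency of each prefix group -- input for the later clustering/mapping steps
print(Counter({k: len(v) for k, v in groups.items()}))
```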
Procedia PDF Downloads 283
432 Polyphenol-Rich Aronia Melanocarpa Juice Consumption and LINE-1 DNA Methylation in a Cohort at Cardiovascular Risk
Authors: Ljiljana Stojković, Manja Zec, Maja Zivkovic, Maja Bundalo, Marija Glibetić, Dragan Alavantić, Aleksandra Stankovic
Abstract:
Cardiovascular disease (CVD) is associated with alterations in DNA methylation, the latter modulated by dietary polyphenols. The present pilot study (part of the original clinical study registered as NCT02800967 at www.clinicaltrials.gov) aimed to investigate the impact of 4-week daily consumption of polyphenol-rich Aronia melanocarpa juice on Long Interspersed Nucleotide Element-1 (LINE-1) methylation in peripheral blood leukocytes in subjects (n=34, age of 41.1±6.6 years) at moderate CVD risk, including an increased body mass index, central obesity, high normal blood pressure and/or dyslipidemia. The goal was also to examine whether factors known to affect DNA methylation, such as folate intake levels and the MTHFR C677T gene variant, as well as anthropometric and metabolic parameters, modulated the LINE-1 methylation levels upon consumption of polyphenol-rich Aronia juice. LINE-1 methylation was analyzed by the MethyLight method. MTHFR C677T genotypes were determined by the polymerase chain reaction-restriction fragment length polymorphism method. Folate intake was assessed by processing the data from a food frequency questionnaire and repeated 24-hour dietary recalls. The serum lipid profile was determined using Roche Diagnostics kits. The statistical analyses were performed using the Statistica software package. In women, a significant decrease in LINE-1 methylation levels was observed after vs. before the treatment period (97.54±1.50% vs. 98.39±0.86%, respectively; P=0.01). The change (after vs. before treatment) in LINE-1 methylation correlated directly with MTHFR 677T allele presence, average daily folate intake and the change in serum low-density lipoprotein cholesterol, and inversely with the change in serum triacylglycerols (R=0.72, R2=0.52, adjusted R2=0.36, P=0.03). The current results imply potential cardioprotective effects of habitual polyphenol-rich Aronia juice consumption, achieved through modifications of the DNA methylation pattern in subjects at CVD risk, which should be further confirmed. Hence, precision nutrition-driven modulations of DNA methylation may become targets for new approaches in the prevention and treatment of CVD.
Keywords: Aronia melanocarpa, cardiovascular risk, LINE-1, methylation, peripheral blood leukocytes, polyphenol
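A minimal sketch of the reported multiple regression of the change in LINE-1 methylation on MTHFR 677T carriage, folate intake, and lipid changes, using ordinary least squares on synthetic placeholder data (not the study's measurements).

```python
import numpy as np

# Minimal sketch of the regression reported (change in LINE-1 methylation on
# MTHFR 677T allele carriage, folate intake, and changes in LDL-C and TAG).
# All data below are synthetic placeholders.
rng = np.random.default_rng(0)
n = 20
mthfr_t = rng.integers(0, 2, n)          # 677T allele present (1) / absent (0)
folate  = rng.normal(300, 60, n)         # average daily folate intake, ug
d_ldl   = rng.normal(0.0, 0.4, n)        # change in LDL cholesterol, mmol/L
d_tag   = rng.normal(0.0, 0.3, n)        # change in triacylglycerols, mmol/L
d_line1 = rng.normal(-0.8, 0.9, n)       # change in LINE-1 methylation, %

X = np.column_stack([np.ones(n), mthfr_t, folate, d_ldl, d_tag])
beta, *_ = np.linalg.lstsq(X, d_line1, rcond=None)

y_hat = X @ beta
r2 = 1 - ((d_line1 - y_hat) ** 2).sum() / ((d_line1 - d_line1.mean()) ** 2).sum()
print("coefficients:", np.round(beta, 3), "R^2:", round(r2, 2))
```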
Procedia PDF Downloads 195
431 Interpretation of the Russia-Ukraine 2022 War via N-Gram Analysis
Authors: Elcin Timur Cakmak, Ayse Oguzlar
Abstract:
This study presents the results of an analysis, by bigram and trigram methods, of tweets sent by Twitter users about the Russia-Ukraine war. On February 24, 2022, Russian President Vladimir Putin declared a military operation against Ukraine, and all eyes turned to this war. Many people living in Russia and Ukraine reacted to this war, protested, and expressed deep concern, as they felt the safety of their families and their futures were at stake. Most people, especially those living in Russia and Ukraine, express their views on the war in different ways, and the most popular way to do this is through social media. Many people prefer to convey their feelings using Twitter, one of the most frequently used social media tools. Since the beginning of the war, thousands of tweets about it have been posted on Twitter from many countries of the world. These tweets, accumulated in data sources, were extracted through the Twitter API using various scripts and analysed with the Python programming language. The aim of the study is to find the word sequences in these tweets by the n-gram method, which is known for its widespread use in computational linguistics and natural language processing. The tweet language used in the study is English. The data set consists of data obtained from Twitter between February 24, 2022, and April 24, 2022. The tweets obtained from Twitter using the #ukraine, #russia, #war, #putin, and #zelensky hashtags together were captured as raw data, and the remaining tweets were included in the analysis stage after being cleaned in the preprocessing stage. In the data analysis part, sentiments were determined to characterize what people post about the war on Twitter. In this regard, negative messages make up the majority of all the tweets, at a ratio of 63.6%. Furthermore, the most frequently used bigram and trigram word groups were found. The most frequently used word groups are "he, is", "I, do", "I, am" for bigrams and "I, do, not", "I, am, not", "I, can, not" for trigrams. In the machine learning phase, the accuracy of classification was measured by the Classification and Regression Trees (CART) and Naïve Bayes (NB) algorithms, applied separately to bigrams and trigrams. We obtained the highest accuracy and F-measure values with the NB algorithm and the highest precision and recall values with the CART algorithm for bigrams. On the other hand, for trigrams, the highest accuracy, precision, and F-measure values were achieved by the CART algorithm, and the highest recall value by NB.
Keywords: classification algorithms, machine learning, sentiment analysis, Twitter
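A minimal sketch of the bigram/trigram counting step described above, using plain Python; the sample tweets are invented placeholders.

```python
import re
from collections import Counter

# Minimal sketch of the n-gram step: tokenize cleaned tweet text and count the
# most frequent bigrams and trigrams (sample tweets are invented placeholders).
tweets = [
    "i do not want this war",
    "he is attacking civilians",
    "i am not able to reach my family",
    "i can not believe this is happening",
]

def ngrams(tokens, n):
    return zip(*(tokens[i:] for i in range(n)))

bigrams, trigrams = Counter(), Counter()
for tweet in tweets:
    tokens = re.findall(r"[a-z']+", tweet.lower())
    bigrams.update(ngrams(tokens, 2))
    trigrams.update(ngrams(tokens, 3))

print("top bigrams:", bigrams.most_common(3))
print("top trigrams:", trigrams.most_common(3))
```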
Procedia PDF Downloads 73
430 3D Simulation of Orthodontic Tooth Movement in the Presence of Horizontal Bone Loss
Authors: Azin Zargham, Gholamreza Rouhi, Allahyar Geramy
Abstract:
One of the most prevalent types of alveolar bone loss is horizontal bone loss (HBL), in which the bone height around teeth is reduced homogeneously. In the presence of HBL, the magnitudes of forces during orthodontic treatment should be adjusted according to the degree of HBL so that the desired tooth movement can be obtained without further bone loss. In order to investigate the appropriate orthodontic force system in the presence of HBL, a three-dimensional numerical model capable of simulating orthodontic tooth movement was developed. The main goal of this research was to evaluate the effect of different degrees of HBL on long-term orthodontic tooth movement. Moreover, the effect of different force magnitudes on orthodontic tooth movement in the presence of HBL was studied. Five three-dimensional finite element models of a maxillary lateral incisor with 0 mm, 1.5 mm, 3 mm, 4.5 mm and 6 mm of HBL were constructed. The long-term orthodontic tooth tipping movements over a 4-week period were obtained in an iterative process through external remodeling of the alveolar bone, with the strains in the periodontal ligament serving as the mechanical stimulus for bone remodeling. To obtain long-term orthodontic tooth movement, in each iteration the strains in the periodontal ligament under a 1-N tipping force were first calculated using finite element analysis. Then, bone remodeling and the subsequent tooth movement were computed in a post-processing software using a custom-written program. Incisal edge, cervical, and apical area displacements in the models with different alveolar bone heights (0, 1.5, 3, 4.5, 6 mm bone loss) in response to a 1-N tipping force were calculated. The maximum tooth displacement was found to be 2.65 mm at the top of the crown of the model with 6 mm of bone loss. The minimum tooth displacement was 0.45 mm at the cervical level of the model with normal bone support. The degrees of tipping of the models in response to different tipping force magnitudes were also calculated for the different degrees of HBL. The degree of tipping tooth movement increased as the force level was increased, and this increase was more prominent in the models with smaller degrees of HBL. By using the finite element method and bone remodeling theories, this study indicated that in the presence of HBL, under the same load, long-term orthodontic tooth movement will increase. The simulation also revealed that even though tooth movement increases with increasing force, this increase is only prominent in the models with smaller degrees of HBL, and tooth models with greater degrees of HBL are less affected by the magnitude of an orthodontic force. Based on our results, the applied force magnitude must be reduced in proportion to the degree of HBL.
Keywords: bone remodeling, finite element method, horizontal bone loss, orthodontic tooth movement
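A schematic, heavily simplified sketch of the iterative strain-driven remodeling loop described above; the strain function is only a placeholder for the finite element solve, and the remodeling constant is an assumed illustrative value, not the study's remodeling law.

```python
# Schematic sketch of the iterative simulation loop described in the abstract:
# each weekly iteration evaluates PDL strain under a 1-N tipping force (here a
# placeholder function, since the real strains come from a finite element run),
# then converts strain into external bone remodeling and tooth displacement.

def pdl_strain_under_load(displacement_mm, bone_loss_mm):
    # Placeholder for the FE solve: strain grows as supporting bone is lost.
    effective_support = 10.0 - bone_loss_mm          # mm of remaining root support (assumed)
    return 0.02 * (1.0 + displacement_mm) / effective_support

def simulate_tipping(bone_loss_mm, weeks=4, k_remodel=25.0):
    displacement = 0.0                               # incisal-edge displacement, mm
    for _ in range(weeks):
        strain = pdl_strain_under_load(displacement, bone_loss_mm)
        displacement += k_remodel * strain           # bone remodeling -> tooth movement
    return displacement

for hbl in (0.0, 1.5, 3.0, 4.5, 6.0):
    print(f"HBL {hbl:>3} mm -> tipping displacement ~ {simulate_tipping(hbl):.2f} mm")
```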
Procedia PDF Downloads 342
429 Measuring Human Perception and Negative Elements of Public Space Quality Using Deep Learning: A Case Study of Area within the Inner Road of Tianjin City
Authors: Jiaxin Shi, Kaifeng Hao, Qingfan An, Zeng Peng
Abstract:
Due to a lack of data sources and data processing techniques, it has always been difficult to quantify public space quality, which includes both urban construction quality and how it is perceived by people, especially in large urban areas. This study proposes a quantitative research method that takes into account the emotional and physical health aspects of the built environment. It highlights the low quality of public areas in Tianjin, China, where there are many negative elements. Deep learning technology is then used to measure how people perceive urban areas. First, this work proposes a deep learning model that simulates how people perceive the quality of urban construction. Second, we perform semantic segmentation on street images to identify the visual elements influencing scene perception. Finally, the scene perception score is correlated with the proportion of visual elements to determine which environmental elements influence scene perception. Using a small-scale labeled Tianjin street-view data set and transfer learning, this study trains five negative-space discriminant models in order to explore the distribution of negative space and the quality improvement of urban streets. It then uses all Tianjin street-level imagery to make predictions and calculate the proportion of negative space. Visualizing the spatial distribution of negative space along the Tianjin Inner Ring Road reveals that the negative elements are mainly found close to the five key districts. The map of Tianjin was combined with the experimental data to perform the visual analysis. Based on the emotional assessment, the distribution of negative elements, and the street guidelines, we propose guidance content and design strategy points addressing the negative phenomena in Tianjin street space along the two dimensions of perception and substance. This work demonstrates the use of deep learning techniques to understand how people appreciate high-quality urban construction, and it complements both theory and practice in urban planning. It illustrates the connection between human perception and the actual physical public space environment, allowing researchers to make urban interventions.
Keywords: human perception, public space quality, deep learning, negative elements, street images
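A short sketch of the final step, correlating the proportion of each segmented visual element with the predicted perception score; element names and all values are illustrative placeholders, not the Tianjin measurements.

```python
import numpy as np
from scipy.stats import pearsonr

# Sketch of the correlation step: relate the per-image proportion of each
# segmented visual element to the predicted perception score.
rng = np.random.default_rng(1)
n_images = 50
proportions = {
    "greenery":   rng.uniform(0.0, 0.4, n_images),
    "blank_wall": rng.uniform(0.0, 0.5, n_images),
    "sky":        rng.uniform(0.1, 0.5, n_images),
}
# Toy perception score that rewards greenery and penalizes blank walls
score = (0.6 * proportions["greenery"] - 0.5 * proportions["blank_wall"]
         + rng.normal(0, 0.05, n_images))

for element, values in proportions.items():
    r, p = pearsonr(values, score)
    print(f"{element:>10}: r = {r:+.2f}, p = {p:.3f}")
```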
Procedia PDF Downloads 114
428 An Experimental Investigation of the Cognitive Noise Influence on the Bistable Visual Perception
Authors: Alexander E. Hramov, Vadim V. Grubov, Alexey A. Koronovskii, Maria K. Kurovskaya, Anastasija E. Runnova
Abstract:
The perception of visual signals in the brain was among the first issues discussed in terms of multistability, a concept introduced to provide mechanisms for information processing in biological neural systems. In this work, the influence of cognitive noise on the visual perception of multistable pictures has been investigated. The study includes an experiment with the bistable Necker cube illusion and the theoretical background explaining the obtained experimental results. In our experiments, Necker cubes with different wireframe contrasts were shown repeatedly to different people, and the probability of choosing one of the cube's projections was calculated for each picture. The Necker cube was placed in the middle of a computer screen as black lines on a white background. The contrast of the three middle lines centered on the left middle corner was used as one of the control parameters. Between two successive presentations of Necker cubes, another picture was shown to distract attention and to make the perception of the next Necker cube more independent of the previous one. Eleven subjects, male and female, aged 20 through 45, were studied. The choice of the Necker cube projection was detected with the electroencephalograph recorder Encephalan-EEGR-19/26, Medicom MTD. To interpret the experimental results, we carried out a theoretical analysis using the simplest double-well potential model in the presence of noise, which leads to the Fokker-Planck equation for the probability density of the stochastic process. For the first time, an analytical solution for the probability of selecting one of the Necker cube projections for different values of wireframe contrast has been obtained. Furthermore, by fitting the experimental measurements with the method of least squares, we calculated the value of the parameter corresponding to the cognitive noise of the person being studied. The range of cognitive noise parameter values for the studied subjects turned out to be [0.08; 0.55]. It should be noted that the experimental results have good reproducibility: the same person studied again on another day produces very similar data with very close levels of cognitive noise. We found an excellent agreement between the analytically deduced probability and the results obtained in the experiment. This good qualitative agreement between theoretical and experimental results indicates that even such a simple model allows simulating brain cognitive dynamics and estimating an important cognitive characteristic of the brain, such as brain noise.
Keywords: bistability, brain, noise, perception, stochastic processes
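A hedged sketch of the least-squares estimation of the cognitive-noise parameter, assuming a generic sigmoid dependence of the choice probability on wireframe contrast (a typical double-well-with-noise form, not necessarily the exact analytical solution derived in the study); the data points are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed model: the stationary probability of choosing one Necker-cube
# interpretation follows a sigmoid in wireframe contrast, with the slope set
# by a noise parameter. Contrast/probability values below are illustrative only.
contrast = np.array([0.0, 0.15, 0.3, 0.45, 0.6, 0.75, 0.9, 1.0])
p_choice = np.array([0.05, 0.10, 0.22, 0.40, 0.62, 0.80, 0.92, 0.96])

def choice_probability(c, noise, c0=0.5):
    # Probability of selecting the "left-oriented" projection at contrast c
    return 1.0 / (1.0 + np.exp(-(c - c0) / noise))

(noise_hat,), _ = curve_fit(choice_probability, contrast, p_choice, p0=[0.2])
print(f"estimated cognitive-noise parameter: {noise_hat:.2f}")
```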
Procedia PDF Downloads 445
427 Colloids and Heavy Metals in Groundwaters: Tangential Flow Filtration Method for Study of Metal Distribution on Different Sizes of Colloids
Authors: Jiancheng Zheng
Abstract:
When metals are released into water from mining activities, they undergo chemical, physical and biological changes and may then become more mobile and transportable along the waterway away from their original sites. Natural colloids, including both organic and inorganic entities, occur naturally in any aquatic environment, with sizes in the nanometer range. Natural colloids in a water system play an important role, quite often a key role, in binding and transporting compounds. When assessing and evaluating metals in natural waters, their sources, mobility, fate, and distribution patterns in the system are the major concerns from the point of view of assessing environmental contamination and pollution during resource development. There are a few ways to quantify colloids and, accordingly, to study how metals distribute across different sizes of colloids. Current research results show that the presence of colloids can enhance the transport of some heavy metals in water, while heavy metals may also influence the transport of colloids when cations in the water system alter the colloids and/or the ionic strength of the water system changes. Therefore, studies of the relationship between different sizes of colloids and different metals in a water system are needed, as natural colloids in water systems are complex mixtures of organic, inorganic and biological materials. Their stability can be sensitive to changes in their shapes, phases, hardness and functionalities caused by coagulation, deposition and other chemical, physical, and biological reactions. Because the adsorption of metal contaminants on colloid surfaces is closely related to colloid properties, it is desirable to fractionate water samples as soon as possible after a sample is taken in the natural environment, in order to avoid changes to the samples during transportation and storage. For this reason, this study carried out groundwater sample processing in the field, using Prep/Scale tangential flow filtration systems with 3-level cartridges (1 kDa, 10 kDa and 100 kDa). Groundwater samples from seven sites at Fort McMurray, Alberta, Canada, were fractionated during the 2015 field sampling season. All samples were processed within 3 hours after collection. Preliminary results show that although the distribution pattern of metals on colloids may vary between samples taken from different sites, some elements often associate with larger colloids (such as Fe and Re), some with finer colloids (such as Sb and Zn), while some occur mainly in the dissolved form (such as Mo and Be). This information is useful for evaluating and projecting the fate and mobility of different metals in these groundwaters and, possibly, in environmental water systems in general.
Keywords: metal, colloid, groundwater, mobility, fractionation, sorption
Procedia PDF Downloads 362
426 Investigating the Effect of Metaphor Awareness-Raising Approach on the Right-Hemisphere Involvement in Developing Japanese Learners’ Knowledge of Different Degrees of Politeness
Authors: Masahiro Takimoto
Abstract:
The present study explored how the metaphor awareness-raising approach affects the involvement of the right hemisphere in developing EFL learners' knowledge regarding the different degrees of politeness embedded within different request expressions. The study was motivated by theoretical considerations regarding conceptual projection and the metaphorical idea that politeness is distance; it applied these considerations to develop Japanese learners' knowledge of different politeness degrees and to explore the connection between metaphorical concept projection and right-hemisphere dominance. Japanese EFL learners do not know certain language strategies (e.g., English requests can be mitigated with biclausal downgraders, including the if-clause with past-tense modal verbs) and have difficulty adjusting the politeness degree attached to request expressions according to situations. The present study used a pre/post-test design to reaffirm the efficacy of the cognitive technique and its connection to right-hemisphere involvement by means of the mouth asymmetry technique. Mouth asymmetry measurement was used because speech articulation, normally controlled mainly by one side of the brain, causes the muscles on the opposite side of the mouth to move more during speech production. The present research did not administer a delayed post-test because it emphasized determining whether metaphor awareness-raising approaches for developing EFL learners' pragmatic proficiency entailed right-hemisphere activation. Each test contained an acceptability judgment test (AJT), along with a speaking test in the post-test. The results showed that the metaphor awareness-raising group performed significantly better than the control group on the acceptability judgment and speaking tests in the post-test. These data revealed that the metaphor awareness-raising approach could promote L2 learning because it aided input enhancement and concept projection; through these aspects, the participants were able to comprehend an abstract concept, the degree of politeness, in terms of the spatial concept of distance. Accordingly, the proximal-distal metaphor enabled the study participants to connect the newly spatio-visualized concept of distance to the different politeness degrees attached to different request expressions; furthermore, they could recall them with the left side of the mouth opening wider than the right. This supports findings from previous studies that indicated the possible involvement of the brain's right hemisphere in metaphor processing.
Keywords: metaphor awareness-raising, right hemisphere, L2 politeness, mouth asymmetry
Procedia PDF Downloads 154
425 Integration of the Electro-Activation Technology for Soy Meal Valorization
Authors: Natela Gerliani, Mohammed Aider
Abstract:
Nowadays, interest in using sustainable technologies for protein extraction from underutilized oilseeds is growing. A major disposal problem for the oil industry is currently posed by by-products of plant food processing such as soybean meal. That is why valorization of soybean meal is important for the oil industry, since it contains high-quality proteins and other valuable components. Generally, soybean meal is used in livestock and poultry feed but is rarely used in human food, even though its chemical composition can compensate for nutritional deficiencies and can be used to balance protein in the human diet. Regarding the efficiency of soybean meal valorization, extraction is a key process for obtaining an enriched protein ingredient that can be incorporated into the food matrix. However, the extraction of most food components, such as proteins, from oilseed by-products implies the use of organic and inorganic chemicals (e.g., acids, bases, TCA-acetone) with a significant environmental impact. In the context of sustainable production, electro-activation technology seems to be a good alternative. Indeed, electro-activation requires only water, food-grade salt and electricity as its main inputs. Moreover, this innovative technology avoids the need for special equipment and worker-safety training, as well as the transport and storage of hazardous materials. Electro-activation is a technology based on applied electrochemistry for the generation of acidic and alkaline solutions through the oxidation-reduction reactions that occur in the vicinity of the electrode/solution interfaces. It is an eco-friendly process that can be used to replace conventional acidic and alkaline extraction. In this research, electro-activation for protein extraction from soybean meal was carried out in an electro-activation reactor. This reactor consists of three compartments separated by cation and anion exchange membranes, which allow non-contacting acidic and basic solutions to be created. Different current intensities (150 mA, 300 mA and 450 mA) and treatment durations (10 min, 30 min and 50 min) were tested. The results showed that the extracts obtained by the electro-activation method have good quality in comparison to conventional extracts. For instance, the extractability obtained with the electro-activation method was 55%, whereas with the conventional method it was only 36%. Moreover, a maximum protein content of 48% in the extract was obtained with the electro-activation technology, compared to a maximum of 41% obtained by conventional extraction. Hence, the environmentally sustainable electro-activation technology seems to be a promising type of protein extraction that can replace conventional extraction technology.
Keywords: by-products, eco-friendly technology, electro-activation, soybean meal
Procedia PDF Downloads 228
424 Facile Wick and Oil Flame Synthesis of High-Quality Hydrophilic Carbon Nano Onions for Flexible Binder-Free Supercapacitor
Authors: Debananda Mohapatra, Subramanya Badrayyana, Smrutiranjan Parida
Abstract:
Carbon nano-onions (CNOs) are spherical graphitic nanostructures composed of concentric shells of graphitic carbon and can be regarded as an intermediate state between fullerenes and graphite. These important members of the fullerene family, also known as multi-shelled fullerenes, can be envisioned as promising supercapacitor electrodes with high energy and power density, since their curvature provides easy access for ions at the electrode-electrolyte interface. Reports on CNOs as electrodes are still sparse, despite their excellent electrochemical performance record, owing to their limited availability and the lack of convenient methods for their high-yield preparation and purification. Keeping these pressing issues in mind, we present a facile, scalable and straightforward flame synthesis method for pure and highly dispersible CNOs uncontaminated by any other forms of carbon; hence, a post-processing purification procedure is not necessary. To the best of our knowledge, this is the very first time an extremely simple, lightweight, inexpensive, flexible, free-standing pristine CNO electrode has been developed without using any binder. A locally available, everyday cotton wipe was used to fabricate such an electrode by a 'dipping and drying' process, providing outstanding stretchability and mechanical flexibility with strong adhesion between the CNOs and the porous wipe. The specific capacitance of 102 F/g, energy density of 3.5 Wh/kg and power density of 1224 W/kg at a 20 mV/s scan rate are the highest values recorded and reported so far in a symmetrical two-electrode cell configuration with 1 M Na2SO4 electrolyte, indicating very good synthesis conditions and an optimum pore size in agreement with the electrolyte ion size. This free-standing CNO electrode also showed excellent cyclic performance and stability, retaining 95% of its original capacity after 5000 charge-discharge cycles. Furthermore, this unique method not only affords a binder-free, free-standing electrode but also provides a general way of fabricating such multifunctional, promising CNO-based nanocomposites for potential device applications in flexible solar cells and lithium-ion batteries.
Keywords: binder-free, flame synthesis, flexible, carbon nano onion
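A hedged sketch of estimating gravimetric capacitance from a CV curve of a symmetric two-electrode cell; the current/voltage data are synthetic and the normalization convention is only one common choice, not necessarily the procedure used by the authors.

```python
import numpy as np

# Synthetic anodic CV branch for a symmetric two-electrode cell. The factor
# conventions (cell vs. single electrode) vary between reports, so treat the
# formula below as one common choice rather than the authors' exact procedure.
scan_rate = 0.020           # V/s (20 mV/s)
mass_total = 2.0e-3         # g, total active mass on both electrodes (assumed)
voltage = np.linspace(0.0, 0.8, 200)                    # V
current = 2.0e-3 * (1 - np.exp(-voltage / 0.05))        # A, toy anodic branch

delta_v = voltage[-1] - voltage[0]
charge = np.trapz(current, voltage) / scan_rate          # Q = (integral of I dV) / scan rate
c_specific = charge / (delta_v * mass_total)             # F/g at the cell level

print(f"cell-level specific capacitance ~ {c_specific:.1f} F/g")
```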
Procedia PDF Downloads 204
423 Changes in Chromatographically Assessed Fatty Acid Profile during Technology of Dairy Products
Authors: Lina Lauciene, Vaida Andruleviciute, Ingrida Sinkeviciene, Mindaugas Malakauskas, Loreta Serniene
Abstract:
Dairy product manufacturers are constantly looking for new markets for their products. In most cases, the problem of product compliance with the compositional requirements of foreign markets is highlighted. This is especially true of the composition of milk fat in dairy products. It is well known that many factors, such as feeding ratio, season, cow breed and stage of lactation, affect the fatty acid composition of milk. However, there is less evidence on the impact of the technological process on the composition of fatty acids in raw milk and the products made from it. In this study, the influence of the technological process on the fat composition of 82% fat butter, 15% fat curd, 3.6% fat yogurt and 2.5% fat UHT milk was determined. The samples were collected at each stage of production, starting with raw milk and ending with the final product, in a Lithuanian milk-processing company. Fatty acid methyl esters were quantified using a GC (Clarus 680, Perkin Elmer) equipped with a flame ionization detector (FID) and a capillary column SP-2560, 100 m x 0.25 mm id x 0.20 µm. Fatty acid peaks were identified using the Supelco® 37 Component FAME Mix. The concentration of each fatty acid was expressed as a percentage of the total fatty acid amount. In the case of UHT milk production, raw milk, cream, the milk mixture, and UHT milk were compared, but no significant differences were found between these stages. Likewise, when analyzing the stages of yogurt production (raw milk, pasteurized milk, milk with a starter culture, and yogurt), no significant changes were detected between stages. A slight difference was observed for C4:0: the percentage of this fatty acid was lower (p=0.053) in the final stage than in the milk with the starter culture. During butter production, the composition of fatty acids in raw cream, buttermilk, and butter did not change significantly; only C14:0 decreased in the butter when compared to the buttermilk. The curd fatty acid analysis showed increases in C6:0, C8:0, C10:0, C11:0, C12:0, C14:0 and C17:0 at the final stage when compared to raw milk, cream, the milk mixture, and whey. Meanwhile, increases in C18:1n9c (in comparison with the milk mixture and curd) and C18:2n6c (in comparison with raw milk, the milk mixture, and curd) were found in cream. The results of this study suggest that the technological process did not affect the overall composition of fatty acids in UHT milk, yogurt, butter, and curd but did have an impact on the concentration of individual fatty acids. In general, all of the fatty acids from the raw milk were carried over into the final product, and only some of them slightly changed in concentration. Therefore, in order to ensure an appropriate composition of certain fatty acids in the final product, producers must carefully choose the raw milk.
Acknowledgment: This research was funded by the Lithuanian Ministry of Agriculture (No. MT-17-13).
Keywords: dairy products, fat composition, fatty acids, technological process
Procedia PDF Downloads 172
422 Structural Equation Modeling Exploration for the Multiple College Admission Criteria in Taiwan
Authors: Tzu-Ling Hsieh
Abstract:
When the Taiwan Ministry of Education implemented a new multiple university entrance policy in 2002, most colleges and universities still used test scores as their main admission criteria. With the forthcoming 12-year basic education curriculum, the Ministry of Education has introduced a new college admission policy, which will be implemented in 2021. The new policy highlights the importance of holistic education by placing more emphasis on the learning process in senior high school rather than only on the outcome of academic testing. However, the development of college admission criteria has not followed a thoughtful process, and universities and colleges have little idea how to construct suitable multiple admission criteria. Although many studies exist from other countries that have implemented multiple college admission criteria for years, these studies cannot represent Taiwanese students, and they are also limited by the lack of a comparison between two different academic fields. Therefore, this study investigated multiple admission criteria and their relationship with college success by analyzing the Taiwan Higher Education Database, with 12,747 samples from 156 universities, and testing a conceptual framework that examines these factors by structural equation modeling (SEM). The conceptual framework was adapted from Pascarella's general causal model and focused on how different admission criteria predict students' college success. It addressed the relationship between admission criteria and college success, as well as how motivation (one of the admission criteria) influences college success through the engagement behaviors of student effort and interactions with agents of socialization. After handling missing values and conducting reliability and validity analyses, the study found that three indicators significantly predict students' college success, defined as the average grade of the last semester: the Chinese language score on the college entrance exam, high school class rank, and the quality of student academic engagement. In addition, motivation significantly predicts the quality of student academic engagement and interactions with agents of socialization. However, the multi-group SEM analysis showed no difference in the prediction of college success between students from the liberal arts and the sciences. Finally, this study provides some suggestions for universities and colleges on developing multiple admission criteria, based on this empirical research on Taiwanese higher education students.
Keywords: college admission, admission criteria, structural equation modeling, higher education, education policy
Procedia PDF Downloads 178
421 Integrated Information Approach to Inbound Logistics in Indian Steel Sector
Authors: N. Jena, Nitin Seth
Abstract:
Globalization and free trade have forced organizations to continuously rethink and rework the rising cost of logistics. Worldwide, the steel sector is witnessing rapid growth on one side and, on the other, facing huge challenges in terms of the availability of raw materials for uninterrupted production. Inbound logistics therefore gains significant importance for ensuring the timely availability of raw materials. In the Indian steel sector, logistics cost is still very large and challenging, and effectively managing inbound logistics decides the profitability and serviceability of the organization. Effective management of inbound logistics also has a major effect on the inventory of the organization. Logistics for the steel industry in India is evolving rapidly, and it is the interplay of infrastructure, technology and new types of service providers that will define whether the industry is able to help its customers reduce their logistics costs. Integration of logistics has been treated as one of the areas with the greatest potential for companies to build a base for cost reduction. Despite the proven benefits for the industry, it is surprising that researchers have not explored this area. Although many researchers have explored the subject of logistics in the steel industry, their perspectives have been limited to exploring and understanding the associated costs and the relations between them. Recognizing this gap, the present research was undertaken to explore the integration opportunities in inbound logistics for the steel sector. Typically, in the Indian steel sector, where most manufacturers depend on imported materials for processing, logistics is very challenging and involves transactions with the supplier, who is situated in a different country; the shipper, who transports the material to the host country; regulators in both countries, including customs and various clearing agents; local logistics service providers; and local transporters/handlers. Inbound logistics cost in the steel sector is very high, accounting for about 15-16% of turnover, and the integration of information across different channels provides an opportunity for improvement and growth of the organization. In the present paper, the case of a leading steel manufacturer has been taken, and the potential for integration of information across various partners has been identified. The paper identifies the grey areas in the steel sector where integration of information can yield major improvements in cycle time and lower inventories. Finally, based on integration of information, the paper presents a business information framework for the steel sector.
Keywords: integration, steel sectors, suppliers, shippers, customs and cargo agents, transporters
Procedia PDF Downloads 341
420 Dual-Phase High Entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅) BxCy Ceramics Produced by Spark Plasma Sintering
Authors: Ana-Carolina Feltrin, Daniel Hedman, Farid Akhtar
Abstract:
High entropy ceramic (HEC) materials are characterized by compositional disorder, with atoms of different metallic elements occupying the cation positions and non-metal elements occupying the anion positions. Several studies have focused on the processing and characterization of high entropy carbides and high entropy borides, as these HECs present interesting mechanical and chemical properties, but only a few studies have been published on HECs containing two non-metallic elements in the composition. Dual-phase high entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BxCy ceramics with different amounts of x and y, (0.25 HfC + 0.25 ZrC + 0.25 VC + 0.25 TiB₂), (0.25 HfC + 0.25 ZrC + 0.25 VB2 + 0.25 TiB₂) and (0.25 HfC + 0.25 ZrB2 + 0.25 VB2 + 0.25 TiB₂), were sintered from boride and carbide precursor powders using SPS at 2000°C with a holding time of 10 min, a uniaxial pressure of 50 MPa and an Ar atmosphere. The sintered specimens formed two HEC phases, a Zr-Hf rich FCC phase and a Ti-V HCP phase, and both phases contained all the metallic elements at 5-50 at%. Phase quantification analysis of XRD data revealed that the molar amount of the hexagonal phase increased with increasing mole fraction of borides in the starting powders, whereas the cubic FCC phase increased with increasing carbide in the starting powders. SPS-consolidated (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BC0.5 and (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)B1.5C0.25 had 94.74% and 88.56% relative density, respectively. (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)B0.5C0.75 presented the highest relative density of 95.99%, with a Vickers hardness of 26.58±1.2 GPa for the boride phase and 18.29±0.8 GPa for the carbide phase, which exceeded the hardness values reported in the literature for high entropy ceramics. The SPS-sintered specimens containing less boron and more carbon presented superior properties even though the metallic composition in each phase was similar to the other compositions investigated. Dual-phase high entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BxCy ceramics were thus successfully fabricated as a boride-carbide solid solution, and the amounts of boron and carbon were shown to influence the phase fractions, phase hardness, and density of the consolidated HECs. The microstructure and phase formation were highly dependent on the amount of non-metallic elements in the composition, and not only on the molar ratio between metals, when producing high entropy ceramics with more than one anion in the sublattice. These findings show the importance of further studies on the optimization of the C to B ratio to improve the properties of dual-phase high entropy ceramics.
Keywords: high-entropy ceramics, borides, carbides, dual-phase
Procedia PDF Downloads 172
419 Terrestrial Laser Scans to Assess Aerial LiDAR Data
Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani
Abstract:
The quality of DEMs may depend on several factors, such as the data source, the capture method, the type of processing used to derive them, or the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by national cartographic agencies through point sampling focused on the vertical component. For this type of evaluation there are standards such as the NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation with a method that takes into account the surface nature of the DEM, so that the sampling is surface-based rather than point-based. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). The present work describes the data capture on the ground and the post-processing tasks needed to obtain the point cloud that will be used as the reference (PCref) to evaluate the quality of the PCpro. Each PCref consists of a 50x50 m patch obtained from the registration of 4 different scan stations. The study area was the Spanish region of Navarra, which covers 10,391 km2; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole reaching heights of up to 7 meters; the scanner was mounted in an inverted position so that the characteristic shadow circle present in the direct position does not appear. To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, with a positioning accuracy better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest is the one corresponding to the bare earth, so it was necessary to apply a filter to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy
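A minimal sketch of the cloud-to-cloud (C2C) comparison mentioned above, using nearest-neighbour distances from each PCpro point to the PCref patch; the point clouds here are synthetic stand-ins.

```python
import numpy as np
from scipy.spatial import cKDTree

# Minimal sketch of the cloud-to-cloud comparison: for every point of the product
# cloud (PCpro) inside a patch, find the nearest reference point (PCref) and
# summarize the distances.
rng = np.random.default_rng(42)
pc_ref = rng.uniform(0, 50, size=(20000, 3))            # dense terrestrial-scan patch
pc_pro = rng.uniform(0, 50, size=(5000, 3))             # sparser airborne LiDAR points

tree = cKDTree(pc_ref)
distances, _ = tree.query(pc_pro, k=1)                   # nearest-neighbour distances, m

print(f"mean C2C distance: {distances.mean():.3f} m, "
      f"RMSE-like value: {np.sqrt((distances**2).mean()):.3f} m, "
      f"95th percentile: {np.percentile(distances, 95):.3f} m")
```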
Procedia PDF Downloads 100
418 Selective Separation of Amino Acids by Reactive Extraction with Di-(2-Ethylhexyl) Phosphoric Acid
Authors: Alexandra C. Blaga, Dan Caşcaval, Alexandra Tucaliuc, Madalina Poştaru, Anca I. Galaction
Abstract:
Amino acids are valuable chemical products used in human foods, in animal feed additives and in the pharmaceutical field. Recently, there has been a noticeable rise in amino acid utilization throughout the world, including their use as raw materials in the production of various industrial chemicals: oil-gelling agents (amino acid-based surfactants) to recover effluent oil in seas and rivers, and poly(amino acids), which are attracting attention for the manufacture of biodegradable plastics. Amino acids can be obtained by biosynthesis or from protein hydrolysis, but their separation from the resulting mixtures can be challenging. In the last decades there has been continuous interest in developing processes that improve the selectivity and yield of the downstream processing steps. The liquid-liquid extraction of amino acids (which are dissociated at any pH value of the aqueous solution) is possible only by means of the reactive extraction technique, mainly with extractants based on organophosphoric acid derivatives, high molecular weight amines and crown ethers. The purpose of this study was to analyse the separation of nine amino acids of acidic character (l-aspartic acid, l-glutamic acid), basic character (l-histidine, l-lysine, l-arginine) and neutral character (l-glycine, l-tryptophan, l-cysteine, l-alanine) by reactive extraction with di-(2-ethylhexyl)phosphoric acid (D2EHPA) dissolved in butyl acetate. The results showed that the separation yield is controlled by the pH value of the aqueous phase: the reactive extraction of amino acids with D2EHPA is possible only if the amino acids exist in aqueous solution in their cationic forms (pH of the aqueous phase below the isoelectric point). The studies on individual amino acids indicated the possibility of selectively separating different groups of amino acids with similar acidic properties as a function of the aqueous solution pH value: the maximum yields are reached in a pH domain of 2–3 and decrease strongly as the pH increases. Thus, for acidic and neutral amino acids, extraction becomes impossible at the isoelectric point (pHi), and for basic amino acids at a pH value lower than pHi, as a result of the dissociation of the carboxylic group. The results obtained for the separation from the mixture of the nine amino acids at different pH values show that all amino acids are extracted, with different yields, in a pH domain of 1.5–3. Above this interval, the extract contains only the amino acids with neutral and basic character; at pH 5–6 only the neutral amino acids are extracted, and at pH > 6 extraction becomes impossible. Using this technique, the total separation of the following amino acid groups has been performed: neutral amino acids at pH 5–5.5, basic amino acids and l-cysteine at pH 4–4.5, l-histidine at pH 3–3.5 and acidic amino acids at pH 2–2.5.
Keywords: amino acids, di-(2-ethylhexyl) phosphoric acid, reactive extraction, selective extraction
Procedia PDF Downloads 431417 Investigate the Competencies Required for Sustainable Entrepreneurship Development in Agricultural Higher Education
Authors: Ehsan Moradi, Parisa Paikhaste, Amir Alam Beigi, Seyedeh Somayeh Bathaei
Abstract:
The need for entrepreneurial sustainability is as important as the entrepreneurship category itself. By transferring competencies within a sustainable entrepreneurship framework, entrepreneurship education can make a significant contribution to the effectiveness of businesses, especially for start-up entrepreneurs. This study analyzes the competencies essential to students for the development of sustainable entrepreneurship. In nature it is an applied causal study, and in terms of data collection it is a field study. The main purpose of this research project is to study and explain the dimensions of sustainable entrepreneurship competencies among agricultural students. The statistical population consists of 730 junior and senior undergraduate students of the Campus of Agriculture and Natural Resources, University of Tehran. The sample size was determined to be 120 using Cochran's formula, and the convenience sampling method was applied. Face validity, structure validity, and diagnostic methods were used to evaluate the validity of the research tool, while Cronbach's alpha and composite reliability were used to evaluate its reliability. Structural equation modeling (SEM) with the confirmatory factor analysis (CFA) method was used to prepare a measurement model for data processing. The results showed that seven key dimensions play a role in shaping sustainable entrepreneurial development competencies: systems thinking competence (STC), embracing diversity and interdisciplinarity (EDI), foresighted thinking competence (FTC), normative competence (NC), action competence (AC), interpersonal competence (IC), and strategic management competence (SMC). It was found that acquiring SMC skills, that is, the ability to plan for achieving sustainable entrepreneurship through the relevant mechanisms, can improve students' entrepreneurship by fostering a sustainability attitude. Regarding AC, in addition to increasing students' analytical ability concerning social and environmental needs and challenges and emphasizing curriculum updates, more attention should be paid to the relationship between the curriculum and its content in the form of programs promoting an entrepreneurship culture. In the field of EDI, it was found that the success of entrepreneurs in terms of sustainability, and the business sustainability of start-up entrepreneurs, depends on their interdisciplinary thinking. It was also found that STC plays an important role in explaining the relationship between sustainability and entrepreneurship. Therefore, focusing on these competencies in agricultural education to train start-up entrepreneurs can lead to sustainable entrepreneurship in the agricultural higher education system. Keywords: sustainable entrepreneurship, entrepreneurship education, competency, agricultural higher education
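As a small illustration of the reliability check mentioned above, the following Python sketch computes Cronbach's alpha for a respondents-by-items matrix; the simulated data and scale size are hypothetical and do not reproduce the study's survey.

```python
# Minimal sketch of a Cronbach's alpha reliability check,
# assuming survey responses arranged as a respondents x items matrix.
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, n_items) array of Likert-type responses."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical example: 120 respondents, 5 items of one competence scale
rng = np.random.default_rng(1)
latent = rng.normal(size=(120, 1))
items = latent + rng.normal(scale=0.8, size=(120, 5))
print(f"alpha = {cronbach_alpha(items):.2f}")
```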
Procedia PDF Downloads 144416 Revealing Thermal Degradation Characteristics of Distinctive Oligo-and Polisaccharides of Prebiotic Relevance
Authors: Attila Kiss, Erzsébet Némedi, Zoltán Naár
Abstract:
As natural prebiotic (non-digestible) carbohydrates stimulate the growth of the colon microflora and contribute to maintaining the health of the host, analytical studies aimed at revealing the chemical behavior of these beneficial food components have come to the forefront of interest. Food processing (especially baking) may lead to a significant conversion of the parent compounds, hence it is of utmost importance to characterize the transformation patterns and the plausible decomposition products formed by thermal degradation. The relevance of this work is confirmed by the widespread use of these carbohydrates (fructo-oligosaccharides, cyclodextrins, raffinose, and resistant starch) in the food industry. More and more functional foodstuffs are being developed based on prebiotics as bioactive components. Twelve different types of oligosaccharides were investigated in order to reveal their thermal degradation characteristics. Different carbohydrate derivatives (D-fructose and D-glucose oligomers and polymers) were exposed to elevated temperatures (150 °C, 170 °C, 190 °C, 210 °C, and 220 °C) for 10 min. An advanced HPLC method was developed and used to identify the decomposition products of the carbohydrates formed as a consequence of the thermal treatment. Gradient elution with a binary solvent system (acetonitrile, water) was applied on an amine-based carbohydrate column. Evaporative light scattering (ELS) proved suitable for the reliable detection of the UV/VIS-inactive carbohydrate degradation products. These experimental conditions and advanced techniques made it possible to survey all the intermediates formed. A change in oligomer distribution was established for all studied prebiotics throughout the thermal treatments. The results indicate an increased extent of chain degradation of the carbohydrate moiety at elevated temperatures. A prevalence of oligomers with shorter chain length, and even the formation of monomeric sugars (D-glucose and D-fructose), could be observed at higher temperatures. Unique oligomer distributions, which have not been described previously, are revealed for each specific carbohydrate studied, which might result in various prebiotic activities. Resistant starches exhibited high stability when thermally treated. The degradation process has been modeled by a plausible reaction mechanism in which proton-catalyzed degradation and chain cleavage take place. Keywords: prebiotics, thermal degradation, fructo-oligosaccharide, HPLC, ELS detection
Procedia PDF Downloads 305415 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning
Authors: Yangzhi Li
Abstract:
Network-based transfer of information and performance customization is now a viable method of digital industrial production in the era of Industry 4.0. Robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor scarcity problem and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to autonomous robot recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and deep learning on large datasets, have not yet been fully explored in intelligent construction, and relevant transdisciplinary theory and practice research still has specific gaps. Optimizing high-performance computing and autonomous visual guidance technologies improves the robot's grasp of the scene and its capacity for autonomous operation. Intelligent vision guidance for industrial robots faces a serious issue with camera calibration, and the use of intelligent visual guidance and identification technologies in industrial production imposes strict accuracy requirements; visual recognition systems therefore face precision challenges. These directly impact the effectiveness and quality of industrial production, necessitating further study of positioning precision in visual guidance and recognition technology. To best facilitate the handling of complicated components, an approach for the visual recognition of parts utilizing machine learning algorithms is proposed. This study will identify the position of target components by detecting boundary and corner information in a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To collect and use components, the operational processing system assigns them to a common coordinate system based on their locations and postures. Inclination detection on the RGB image and verification against the depth image will be used to determine a component's current posture. Finally, a virtual environment model for the robot's obstacle-avoidance route will be constructed using the point cloud information. Keywords: robotic construction, robotic assembly, visual guidance, machine learning
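As a sketch of the aspect-ratio step described above, the Python snippet below estimates a component's principal-axis extents and aspect ratio from a point cloud via PCA; the synthetic plank-shaped cloud and all names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: estimating a component's aspect ratio from a dense point cloud
# via principal component analysis (PCA). The point cloud here is synthetic.
import numpy as np

def aspect_ratio(points):
    """points: (N, 3) array. Returns extents along principal axes and their ratio."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    _, vecs = np.linalg.eigh(cov)          # principal axes (ascending eigenvalues)
    projected = centered @ vecs            # rotate points into the principal frame
    extents = projected.max(axis=0) - projected.min(axis=0)
    extents = np.sort(extents)[::-1]       # longest extent first
    return extents, extents[0] / extents[1]

# Synthetic 2.0 x 0.5 x 0.1 m plank-shaped component (axis-aligned for simplicity)
rng = np.random.default_rng(2)
plank = rng.uniform([-1.0, -0.25, -0.05], [1.0, 0.25, 0.05], size=(5000, 3))
extents, ratio = aspect_ratio(plank)
print(f"extents ~ {np.round(extents, 2)}, aspect ratio ~ {ratio:.1f}")
```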
Procedia PDF Downloads 86414 The Influence of Morphology and Interface Treatment on Organic 6,13-bis (triisopropylsilylethynyl)-Pentacene Field-Effect Transistors
Authors: Daniel Bülz, Franziska Lüttich, Sreetama Banerjee, Georgeta Salvan, Dietrich R. T. Zahn
Abstract:
For the development of electronics, organic semiconductors are of great interest due to their adjustable optical and electrical properties. They are especially interesting for spintronic applications because of their weak spin scattering, which leads to longer spin lifetimes compared to inorganic semiconductors. It has been shown that some organic materials change their resistance when an external magnetic field is applied. Pentacene is one of the materials that exhibit the so-called photoinduced magnetoresistance, which results in a modulation of the photocurrent when the external magnetic field is varied. The soluble derivative of pentacene, 6,13-bis(triisopropylsilylethynyl)-pentacene (TIPS-pentacene), also exhibits the same negative magnetoresistance. Aiming for simpler fabrication processes, in this work we compare TIPS-pentacene organic field-effect transistors (OFETs) made from solution with those fabricated by thermal evaporation. Because of the different processing, the TIPS-pentacene thin films exhibit different morphologies in terms of crystal size and homogeneity of substrate coverage. On the other hand, the interface treatment is known to have a strong influence on the threshold voltage, eliminating trap states of the silicon oxide at the gate electrode and thereby changing the electrical switching response of the transistors. Therefore, we investigate the influence of interface treatment with octadecyltrichlorosilane (OTS) or with a simple cleaning procedure using acetone, ethanol, and deionized water. The transistors consist of prestructured OFET substrates including gate, source, and drain electrodes, on top of which TIPS-pentacene dissolved in a mixture of tetralin and toluene is deposited by drop-, spray-, or spin-coating; thereafter the sample is kept for one hour at a temperature of 60 °C. For transistor fabrication by thermal evaporation, the prestructured OFET substrates are also kept at 60 °C during deposition, at a rate of 0.3 nm/min and a pressure below 10⁻⁶ mbar. The OFETs are characterized by means of optical microscopy in order to determine the overall quality of the sample, i.e. crystal size and coverage of the channel region. The output and transfer characteristics are measured in the dark and under illumination provided by a white-light LED in the spectral range from 450 nm to 650 nm with a power density of (8±2) mW/cm². Keywords: organic field effect transistors, solution processed, surface treatment, TIPS-pentacene
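For readers unfamiliar with how device parameters are typically obtained from the transfer characteristics mentioned above, the following Python sketch extracts a saturation-regime field-effect mobility and threshold voltage from a sqrt(I_D) versus V_G fit, using the standard relation I_D = (W/2L)·mu·C_i·(V_G − V_T)². The device geometry, capacitance, and current values are placeholders rather than measured data.

```python
# Illustrative extraction of field-effect mobility from a saturation-regime
# transfer curve. All numbers below are placeholders, not measured device data.
import numpy as np

W, L = 1e-3, 20e-6   # assumed channel width and length (m)
C_i = 1.0e-4         # assumed gate dielectric capacitance per area (F/m^2)

# Hypothetical transfer characteristic (gate voltage in V, drain current in A)
V_G = np.linspace(-40, -10, 16)
I_D = (W / (2 * L)) * 0.5e-4 * C_i * (V_G + 5.0) ** 2  # fake device, mu = 0.5e-4 m^2/(V s)

slope, intercept = np.polyfit(V_G, np.sqrt(np.abs(I_D)), 1)  # sqrt(I_D) vs V_G is linear
mu = slope ** 2 * 2 * L / (W * C_i)   # mobility in m^2/(V s)
V_T = -intercept / slope              # threshold voltage from the x-intercept
print(f"mu ~ {mu * 1e4:.2f} cm^2/(V s), V_T ~ {V_T:.1f} V")
```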
Procedia PDF Downloads 447413 Effect of Enzymatic Hydrolysis and Ultrasounds Pretreatments on Biogas Production from Corn Cob
Authors: N. Pérez-Rodríguez, D. García-Bernet, A. Torrado-Agrasar, J. M. Cruz, A. B. Moldes, J. M. Domínguez
Abstract:
The world economy is based on non-renewable fossil fuels such as petroleum and natural gas, which entails their rapid depletion and environmental problems. In EU countries, the objective is that at least 20% of total energy supplies in 2020 should be derived from renewable resources. Biogas, a product of the anaerobic degradation of organic substrates, represents an attractive green alternative for meeting part of these energy needs. Nowadays, the trend towards a circular economy model involves the efficient use of residues by transforming them from waste into a new resource. In this sense, the characteristics of agricultural residues (plentiful, renewable, and eco-friendly) favour their valorisation as substrates for biogas production. Corn cob is a by-product of maize processing, representing 18% of the total maize mass. Its importance lies in the high production of this cereal (more than 1 x 10⁹ tons in 2014). Due to its lignocellulosic nature, corn cob contains three main polymers: cellulose, hemicellulose, and lignin. The crystalline, highly ordered structures of cellulose and lignin hinder microbial attack and subsequent biogas production. For optimal lignocellulose utilization and to enhance gas production in anaerobic digestion, such materials are usually subjected to different pretreatment technologies. In the present work, enzymatic hydrolysis, ultrasound, and the combination of both technologies were assayed as pretreatments of corn cob for biogas production. Enzymatic hydrolysis pretreatment was started by adding 0.044 U of Ultraflo® L feruloyl esterase per gram of dry corn cob. Hydrolyses were carried out in 50 mM sodium-phosphate buffer, pH 6.0, with a solid:liquid ratio of 1:10 (w/v), at 150 rpm and 40 ºC, in darkness, for 3 hours. Ultrasound pretreatment was performed by subjecting corn cob, in 50 mM sodium-phosphate buffer, pH 6.0, with a solid:liquid ratio of 1:10 (w/v), to a power of 750 W for 1 minute. In order to observe the effect of the combination of both pretreatments, some samples were first sonicated and then enzymatically hydrolysed. In terms of methane production, anaerobic digestion of the corn cob pretreated by enzymatic hydrolysis gave a positive result, achieving 290 L CH4 kg MV⁻¹ (compared with 267 L CH4 kg MV⁻¹ obtained with untreated corn cob). Although the use of ultrasound as the only pretreatment proved detrimental (gas production decreased to 244 L CH4 kg MV⁻¹ after 44 days of anaerobic digestion), its combination with enzymatic hydrolysis was beneficial, reaching the highest value (300.9 L CH4 kg MV⁻¹). Consequently, the combination of both pretreatments improved biogas production from corn cob. Keywords: biogas, corn cob, enzymatic hydrolysis, ultrasound
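A quick arithmetic check of the methane yields reported above (all in L CH4 kg MV⁻¹) shows the relative effect of each pretreatment against the untreated control; the short Python snippet below simply reproduces those percentage changes.

```python
# Relative methane-yield changes computed from the figures in the abstract.
untreated = 267.0
yields = {"enzymatic": 290.0, "ultrasound": 244.0, "combined": 300.9}

for label, value in yields.items():
    change = 100.0 * (value - untreated) / untreated
    print(f"{label:>10}: {value:6.1f} L CH4/kg MV ({change:+.1f}% vs untreated)")
# enzymatic: +8.6 %, ultrasound: -8.6 %, combined: +12.7 %
```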
Procedia PDF Downloads 267412 Bi-Component Particle Segregation Studies in a Spiral Concentrator Using Experimental and CFD Techniques
Authors: Prudhvinath Reddy Ankireddy, Narasimha Mangadoddy
Abstract:
Spiral concentrators are commonly used in various industries, including mineral and coal processing, to efficiently separate materials based on their density and size. In these concentrators, a mixture of solid particles and fluid (usually water) is introduced as feed at the top of a spiral channel. As the mixture flows down the spiral, centrifugal and gravitational forces act on the particles, causing them to stratify based on their density and size. Spiral flows exhibit complex fluid dynamics, and the interactions involve multiple phases and components. Understanding the behavior of these phases within the spiral concentrator is crucial for achieving efficient separation. In this work, an experimental bi-component particle interaction study is conducted using magnetite (higher density) and silica (lower density) in different proportions processed in the spiral concentrator. The observed separation reveals that denser particles accumulate towards the inner region of the spiral trough, while a significant concentration of lighter particles is found close to the outer edge. The fifth turn of the spiral trough is partitioned into five zones to achieve a comprehensive distribution analysis of bi-component particle segregation. Samples are gathered from these individual streams using an in-house sample collector, and subsequent analysis is conducted to assess component segregation. Along the trough, there was a decline in the concentration of coarser particles, accompanied by an increase in the concentration of lighter particles. The segregation pattern indicates that the heavier coarse component accumulates in the inner zone, whereas the lighter fine component collects in the outer zone. The middle zone primarily consists of heavier fine particles and lighter coarse particles. The zone-wise results reveal that a significant fraction of the segregation occurs in the inner and middle zones, while finer magnetite and silica particles predominantly accumulate in the outer zones, which show the smallest fraction of segregation. Additionally, numerical simulations are carried out using a computational fluid dynamics (CFD) model based on the volume of fluid (VOF) approach and incorporating the RSM turbulence model. The discrete phase model (DPM) is employed for particle tracking, providing insight into the segregation of magnetite and silica particles along the spiral trough. Keywords: spiral concentrator, bi-component particle segregation, computational fluid dynamics, discrete phase model
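As a highly simplified illustration of the Lagrangian (DPM-style) particle tracking mentioned above, the Python sketch below integrates a single particle settling in still water under Stokes drag and buoyancy-corrected gravity, showing why magnetite and silica of the same size respond differently; the fluid and particle properties are generic illustrative values, not the simulation settings of this study.

```python
# Hedged, simplified sketch of Lagrangian particle tracking: one particle
# settling in still water under Stokes drag and buoyancy-corrected gravity.
import numpy as np

rho_f, mu = 1000.0, 1.0e-3   # water density (kg/m^3) and viscosity (Pa s)
g = 9.81

def settle(rho_p, d_p, t_end=0.05, dt=1e-5):
    """Explicit Euler integration of dv/dt = drag + buoyancy-corrected gravity."""
    tau = rho_p * d_p ** 2 / (18.0 * mu)    # Stokes relaxation time
    v = 0.0
    for _ in np.arange(0.0, t_end, dt):
        drag = -v / tau                      # fluid at rest, so (u_f - v)/tau = -v/tau
        gravity = g * (1.0 - rho_f / rho_p)  # gravity minus buoyancy per unit mass
        v += dt * (drag + gravity)
    return v

# 100-micron magnetite (denser) vs silica (lighter) particles
print(f"magnetite: {settle(5200.0, 100e-6) * 1000:.1f} mm/s")
print(f"silica:    {settle(2650.0, 100e-6) * 1000:.1f} mm/s")
```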
Procedia PDF Downloads 67411 Research on Tight Sandstone Oil Accumulation Process of the Third Member of Shahejie Formation in Dongpu Depression, China
Authors: Hui Li, Xiongqi Pang
Abstract:
In recent years, tight oil has become a hot spot for unconventional oil and gas exploration and development worldwide. The Dongpu Depression is a typical hydrocarbon-rich basin in the southwest of the Bohai Bay Basin, in which tight sandstone oil and gas have been discovered in deep reservoirs, most of which are buried deeper than 3500 m. The distribution and development characteristics of these deep tight sandstone reservoirs need to be studied. The main source rocks in the study area are the dark mudstone and shale of the middle and lower third sub-member of the Shahejie Formation. The Total Organic Carbon (TOC) content of the source rock is between 0.08% and 11.54%, generally higher than 0.6%, and the value of S1+S2 is between 0.04 and 72.93 mg/g, generally higher than 2 mg/g; overall, the source rock can be evaluated as moderate to good. The kerogen type of the organic matter is predominantly type II1 and II2. Vitrinite reflectance (Ro) is mostly greater than 0.6%, indicating that the source rock has entered the hydrocarbon generation threshold. The physical properties of the reservoir are poor: most of the reservoir has a porosity lower than 12% and a permeability of less than 1×10⁻³ μm². The rocks in this area show great heterogeneity, and some areas develop sweet spots with high porosity and permeability. According to SEM, thin-section images, inclusion tests, and other analyses, the reservoir was affected by compaction and cementation during the early diagenesis stage (44-31 Ma). This diagenesis produced tight reservoirs in the Huzhuangji, Pucheng, and Weicheng areas, while the porosity in the Machang, Qiaokou, and Wenliu areas was still over 12%. During stage A of the middle diagenesis phase (31-17 Ma), reservoir porosity in the Machang, Pucheng, and Huzhuangji areas increased due to dissolution; after that, the source rock reached the oil generation window for the first phase of hydrocarbon charging (31-23 Ma), which formed conventional oil accumulations in the Machang, Qiaokou, Wenliu, and Huzhuangji areas and unconventional tight reservoirs in the Pucheng and Weicheng areas. Then, during stage B of the middle diagenesis phase (17-7 Ma), reservoir porosity continued to decrease after the dissolution, leaving the reservoirs generally compacted. Since then, a second phase of hydrocarbon charging has been ongoing (from 7 Ma), and most of the pools charged and formed during this process are tight sandstone oil reservoirs. In conclusion, tight sandstone oil in the Dongpu Depression formed in two patterns, which can be summarized as 'densification first, then accumulation' and 'accumulation first, then densification'. Keywords: accumulation process, diagenesis, dongpu depression, tight sandstone oil
Procedia PDF Downloads 116410 Enhancing Tower Crane Safety: A UAV-based Intelligent Inspection Approach
Authors: Xin Jiao, Xin Zhang, Jian Fan, Zhenwei Cai, Yiming Xu
Abstract:
Tower cranes play a crucial role in the construction industry, facilitating the vertical and horizontal movement of materials and aiding building construction, especially for high-rise structures. However, tower crane accidents can lead to severe consequences, highlighting the importance of effective safety management and inspection. This paper presents an innovative approach to tower crane inspection utilizing Unmanned Aerial Vehicles (UAVs) and an Intelligent Inspection APP System. The system leverages UAVs equipped with high-definition cameras to conduct efficient and comprehensive inspections, reducing manual labor, inspection time, and risk. By integrating advanced technologies such as Real-Time Kinematic (RTK) positioning and digital image processing, the system enables precise route planning and the collection of images of safety hazards. A case study conducted on a construction site demonstrates the practicality and effectiveness of the proposed method, showcasing its potential to enhance tower crane safety. On-site testing of UAV intelligent inspections reveals key findings: tower crane hazards can be inspected efficiently within 30 minutes, with full-identification coverage rates of 76.3%, 64.8%, and 76.2% for major, significant, and general hazards, respectively, and preliminary-identification coverage rates of 18.5%, 27.2%, and 19%, respectively. Notably, UAVs effectively identify various tower crane hazards, except for those requiring auditory detection. The limitations of this study primarily involve two aspects. Firstly, during the initial inspection, manual drone piloting is required to mark tower crane waypoints; subsequent automated flight inspections then reuse the marked route. Secondly, images captured by the drone necessitate manual identification and review, which can be time-consuming for equipment management personnel, particularly when dealing with a large volume of images. Subsequent research will focus on AI training and recognition of safety hazard images, as well as the automatic generation of inspection reports and corrective management based on the recognition results; this development is in progress, and outcomes will be released in due course. Keywords: tower crane, inspection, unmanned aerial vehicle (UAV), intelligent inspection app system, safety management
Procedia PDF Downloads 42409 CyberSteer: Cyber-Human Approach for Safely Shaping Autonomous Robotic Behavior to Comply with Human Intention
Authors: Vinicius G. Goecks, Gregory M. Gremillion, William D. Nothwang
Abstract:
Modern approaches to training intelligent agents rely on prolonged training sessions, large amounts of input data, and multiple interactions with the environment. This restricts the application of these learning algorithms in robotics and real-world applications, in which there is low tolerance for inadequate actions, interactions are expensive, and real-time processing and action are required. This paper addresses this issue by introducing CyberSteer, a novel approach to efficiently designing intrinsic reward functions based on human intention to guide deep reinforcement learning agents with no environment-dependent rewards. CyberSteer uses non-expert human operators for the initial demonstration of a given task or desired behavior. The collected trajectories are used to train a behavior cloning deep neural network that runs asynchronously in the background and suggests actions to the deep reinforcement learning module. An intrinsic reward is computed based on the similarity between the suggested actions and the actions taken by the deep reinforcement learning algorithm commanding the agent. This intrinsic reward can also be reshaped through additional human demonstration or critique. This approach removes the need for environment-dependent or hand-engineered rewards while still being able to safely shape the behavior of autonomous robotic agents, in this case based on human intention. CyberSteer is tested in a high-fidelity unmanned aerial vehicle simulation environment, Microsoft AirSim. The simulated aerial robot performs collision avoidance through a clustered forest environment using forward-looking depth sensing and roll, pitch, and yaw reference angle commands to the flight controller. This approach shows that the behavior of robotic systems can be shaped in a reduced amount of time when guided by a non-expert human who is only aware of the high-level goals of the task. Decreasing the amount of training time required and increasing safety during training maneuvers will allow faster deployment of intelligent robotic agents in dynamic real-world applications. Keywords: human-robot interaction, intelligent robots, robot learning, semisupervised learning, unmanned aerial vehicles
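A minimal sketch of the kind of similarity-based intrinsic reward described above is given below; the Gaussian form and scale parameter are assumptions for illustration and are not claimed to be the authors' exact formulation.

```python
# Minimal sketch: the intrinsic reward grows as the RL agent's action approaches
# the action suggested by the behavior-cloning network. The Gaussian-similarity
# form and its scale are illustrative assumptions.
import numpy as np

def intrinsic_reward(action_rl, action_bc, scale=1.0):
    """Similarity-based reward in [0, 1] from the distance between two action vectors."""
    diff = np.asarray(action_rl) - np.asarray(action_bc)
    return float(np.exp(-np.dot(diff, diff) / (2.0 * scale ** 2)))

# Example: roll, pitch, yaw reference angle commands (radians)
suggested = np.array([0.10, -0.05, 0.00])  # from the behavior-cloning network
taken = np.array([0.12, -0.02, 0.05])      # chosen by the RL policy
print(f"intrinsic reward = {intrinsic_reward(taken, suggested):.3f}")
```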
Procedia PDF Downloads 259408 Computational Fluid Dynamics Design and Analysis of Aerodynamic Drag Reduction Devices for a Mazda T3500 Truck
Authors: Basil Nkosilathi Dube, Wilson R. Nyemba, Panashe Mandevu
Abstract:
In highway driving, over 50 percent of the power produced by the engine is used to overcome aerodynamic drag, a force that opposes a body's motion through the air. Aerodynamic drag, and thus fuel consumption, increases rapidly at speeds above 90 km/h, so it is desirable to minimize it. Reducing aerodynamic drag in highway driving is the best approach to minimizing fuel consumption and the negative impact of greenhouse gas emissions on the natural environment; fuel economy is the ultimate concern of automotive development. This study aims to design and analyze drag-reducing devices for a Mazda T3500 truck, namely cab roof and rear (trailer tail) fairings, and to investigate the aerodynamic effects of adding these add-on devices. To accomplish this, two 3D CAD models of the Mazda truck were designed using the Design Modeler, one with these add-on devices and the other without. The models were exported to ANSYS Fluent for computational fluid dynamics analysis; no wind tunnel tests were performed. A fine mesh with more than 10 million cells was applied in the discretization of the models. The realizable k-ε turbulence model with enhanced wall treatment was used to solve the Reynolds-Averaged Navier-Stokes (RANS) equations. In order to represent highway driving conditions, the simulations were run at a speed of 100 km/h; the effects of the devices were also investigated for low-speed driving. The drag coefficients for both models were obtained from the numerical calculations. The simulations show that adding the cab roof and rear (trailer tail) fairings yields a significant reduction in aerodynamic drag at the higher speed, with the greatest drag reduction obtained when both devices are used. Visuals from post-processing show that the rear fairing minimized the low-pressure region at the rear of the trailer when moving at highway speed; it achieved this by streamlining the turbulent airflow, thereby delaying flow separation. For lower speeds, there were no significant differences in drag coefficients between the two models (original and modified). The results show that these devices can be adopted to improve the aerodynamic efficiency of the Mazda T3500 truck at highway speeds. Keywords: aerodynamic drag, computation fluid dynamics, fluent, fuel consumption
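To make the opening claim about drag at highway speed concrete, the short Python sketch below evaluates the standard drag relations F = 0.5 ρ C_d A v² and P = F v at several speeds; the drag coefficient and frontal area are assumed placeholder values for a light truck, not results of this study.

```python
# Back-of-the-envelope illustration of why drag dominates at highway speed.
rho = 1.225        # air density, kg/m^3
Cd, A = 0.7, 6.0   # assumed drag coefficient and frontal area (m^2) for a light truck

def drag_power_kw(speed_kmh):
    v = speed_kmh / 3.6
    force = 0.5 * rho * Cd * A * v ** 2  # drag force, N
    return force * v / 1000.0            # drag power, kW

for kmh in (60, 90, 100):
    print(f"{kmh:3d} km/h: ~{drag_power_kw(kmh):5.1f} kW to overcome drag")
# Drag power grows with the cube of speed, so even a modest Cd reduction from
# the fairings pays off disproportionately at 100 km/h.
```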
Procedia PDF Downloads 138