Search results for: parallel processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4760


590 Transition in Protein Profile, Maillard Reaction Products and Lipid Oxidation of Flavored Ultra High Temperature Treated Milk

Authors: Muhammad Ajmal

Abstract:

- Thermal processing and subsequent storage of ultra-heat treated (UHT) milk lead to alterations in protein profile, Maillard reaction and lipid oxidation. The concentration of carbohydrates in the normal and flavored versions of UHT milk is considerably different. Transition in protein profile, Maillard reaction and lipid oxidation in flavored UHT milk was determined for 90 days at ambient conditions and analyzed at 0, 45 and 90 days of storage. Protein profile, hydroxymethyl furfural, furosine, Nε-carboxymethyl-l-lysine, fatty acid profile, free fatty acids, peroxide value and sensory characteristics were determined. After 90 days of storage, fat, protein and total solids contents and pH were significantly lower than the initial values determined at day 0. Compared to the protein profile of normal UHT milk, more pronounced changes were recorded in the different protein fractions of flavored UHT milk at 45 and 90 days of storage. Tyrosine contents of flavored UHT milk at 0, 45 and 90 days of storage were 3.5, 6.9 and 15.2 µg tyrosine/ml. After 45 days of storage, the declines in αs1-casein, αs2-casein, β-casein, κ-casein, β-lactoglobulin, α-lactalbumin, immunoglobulin and bovine serum albumin were 3.35%, 10.5%, 7.89%, 18.8%, 53.6%, 20.1%, 26.9% and 37.5%, respectively. After 90 days of storage, the corresponding declines were 11.2%, 34.8%, 14.3%, 33.9%, 56.9%, 24.8%, 36.5% and 43.1%. Hydroxymethyl furfural contents of UHT milk at 0, 45 and 90 days of storage were 1.56, 4.18 and 7.61 µmol/L. Furosine contents of flavored UHT milk at 0, 45 and 90 days of storage were 278, 392 and 561 mg/100 g protein. Nε-carboxymethyl-l-lysine contents of flavored UHT milk at 0, 45 and 90 days of storage were 67, 135 and 343 mg/kg protein. After 90 days of storage of flavored UHT milk, the loss of unsaturated fatty acids was 45.7% of the initial values.
At 0, 45 and 90 days of storage, the free fatty acids of flavored UHT milk were 0.08%, 0.11% and 0.16% (p < 0.05). Peroxide values of flavored UHT milk at 0, 45 and 90 days of storage were 0.22, 0.65 and 2.88 meq O₂/kg. Sensory analysis of flavored UHT milk after 90 days indicated that appearance, flavor and mouthfeel scores significantly decreased from the initial values recorded at day 0. The findings of this investigation show that more pronounced changes take place in the protein profile, Maillard reaction products and lipid oxidation of flavored UHT milk than of normal UHT milk.

Keywords: UHT flavored milk, hydroxymethyl furfural, lipid oxidation, sensory properties

Procedia PDF Downloads 199
589 Humans’ Physical Strength Capacities on Different Handwheel Diameters and Angles

Authors: Saif K. Al-Qaisi, Jad R. Mansour, Aseel W. Sakka, Yousef Al-Abdallat

Abstract:

Handwheels are common to numerous industries, such as power generation plants, oil refineries, and chemical processing plants. The forces required to manually turn handwheels have been shown to exceed operators’ physical strengths, posing risks for injuries. Therefore, the objectives of this research were twofold: (1) to determine humans’ physical strengths on handwheels of different sizes and angles, and (2) to subsequently propose recommended torque limits (RTLs) that accommodate the strengths of even the weaker segment of the population. Thirty male and thirty female participants were recruited from a university student population. Participants were asked to exert their maximum possible forces in a counter-clockwise direction on handwheels of different sizes (35 cm, 45 cm, 60 cm, and 70 cm) and angles (0° horizontal, 45° slanted, and 90° vertical). The participant’s posture was controlled by adjusting the handwheel to be at the elbow level of each participant, requiring the participant to stand erect, and restricting the hand placements to the 10-11 o’clock position for the left hand and the 4-5 o’clock position for the right hand. A torque transducer (Futek TDF600) was used to measure the maximum torques generated by each participant. Three repetitions were performed for each handwheel condition, and the average was computed. Results showed that, at all handwheel angles, as the handwheel diameter increased, the maximum torques generated also increased, while the underlying forces decreased. Controlling for handwheel diameter, the 0° handwheel was associated with the largest torques and forces, and the 45° handwheel was associated with the lowest torques and forces. Hence, a larger handwheel diameter (as large as 70 cm) at a 0° angle is favored for increasing the torque production capacities of users.
Also, it was recognized that, regardless of handwheel diameter and angle, the torque demands in the field are much greater than humans’ torque production capabilities. As such, this research proposed RTLs for the different handwheel conditions by using the 25th-percentile values of the females’ torque strengths. The proposed recommendations may serve future standard developers in defining torque limits that accommodate humans’ strengths.
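
The 25th-percentile rule used for the RTLs can be sketched in a few lines; the torque values below are illustrative stand-ins, not the study's data (the real study pooled 30 female participants per condition):

```python
import numpy as np

# Hypothetical female maximum-torque measurements (N*m) for one
# handwheel size/angle condition; values are illustrative only.
female_torques = np.array([38.2, 41.5, 35.9, 44.0, 39.7,
                           36.8, 42.3, 40.1, 37.5, 43.6])

# Recommended torque limit (RTL): the 25th percentile of the female
# strength distribution, so the weaker population segment is accommodated.
rtl = np.percentile(female_torques, 25)
```

Any torque demand above `rtl` would then be flagged as exceeding the recommended limit for that handwheel condition.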

Keywords: handwheel angle, handwheel diameter, humans’ torque production strengths, recommended torque limits

Procedia PDF Downloads 112
588 Cost-Effective Mechatronic Gaming Device for Post-Stroke Hand Rehabilitation

Authors: A. Raj Kumar, S. Bilaloglu

Abstract:

Stroke is a leading cause of adult disability worldwide. We depend on our hands for our activities of daily living (ADL). Although many patients regain the ability to walk, they continue to experience long-term hand motor impairments. As the number of individuals with young stroke is increasing, there is a critical need for effective approaches to the rehabilitation of hand function post-stroke. Motor relearning for dexterity requires task-specific kinesthetic, tactile and visual feedback. However, when a stroke results in both sensory and motor impairment, it becomes difficult to ascertain when and what type of sensory substitutions can facilitate motor relearning. In an ideal situation, real-time task-specific data on the ability to learn, and data-driven feedback to assist such learning, would greatly assist rehabilitation for dexterity. We have found that kinesthetic and tactile information from the unaffected hand can help patients re-learn the use of optimal fingertip forces during a grasp-and-lift task. Measurements of fingertip grip force (GF), load force (LF), their corresponding rates (GFR and LFR), and other metrics can be used to gauge the impairment level and progress during learning. Currently, ATI mini force-torque sensors are used in research settings to measure and compute the LF, GF, and their rates while grasping objects of different weights and textures. Use of the ATI sensor is cost-prohibitive for deployment in clinical or at-home rehabilitation. A cost-effective mechatronic device was developed to quantify GF, LF, and their rates for stroke rehabilitation purposes using off-the-shelf components such as load cells, flexi-force sensors, and an Arduino UNO microcontroller. A salient feature of the device is its integration with an interactive gaming environment to render a highly engaging user experience.
This paper elaborates the integration of kinesthetic and tactile sensing through computation of LF, GF and their corresponding rates in real time, information processing, and interactive interfacing through augmented reality for visual feedback.
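
The force-rate computation described above can be sketched as follows; the sampling rate, signal shapes and numbers are illustrative assumptions, not the device's actual firmware:

```python
import numpy as np

# Hypothetical 100 Hz samples from a grasp-and-lift trial: gf is grip
# (normal) force, lf is load (tangential) force, both in newtons.
fs = 100.0
t = np.arange(0.0, 1.0, 1.0 / fs)
gf = 5.0 * np.clip(t / 0.4, 0.0, 1.0)          # grip ramps to 5 N over 0.4 s
lf = 2.0 * np.clip((t - 0.1) / 0.4, 0.0, 1.0)  # load lags grip slightly

# Grip/load force rates (GFR, LFR) by numerical differentiation.
gfr = np.gradient(gf, 1.0 / fs)
lfr = np.gradient(lf, 1.0 / fs)

# Example summary metrics for gauging impairment and progress.
peak_gfr = gfr.max()
peak_lfr = lfr.max()
```

In the real device the same differentiation would run on streaming load-cell and flexi-force samples on the microcontroller, feeding the gaming environment in real time.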

Keywords: feedback, gaming, kinesthetic, rehabilitation, tactile

Procedia PDF Downloads 240
587 Human Health Risk Assessment from Metals Present in a Soil Contaminated by Crude Oil

Authors: M. A. Stoian, D. M. Cocarta, A. Badea

Abstract:

The main sources of soil pollution due to petroleum contaminants are industrial processes involving crude oil. Soil polluted with crude oil is toxic to plants, animals, and humans. Human exposure to contaminated soil occurs through different exposure pathways: soil ingestion, diet, inhalation, and dermal contact. The present research is focused on soil contamination with heavy metals as a consequence of soil pollution with petroleum products. The human exposure pathways considered are accidental ingestion of contaminated soil and dermal contact. The purpose of the paper is to identify the human health risk (carcinogenic risk) from soil contaminated with heavy metals. The human exposure and risk were evaluated for five contaminants of concern out of the eleven identified in the soil. Two soil samples were collected from a bioremediation platform in the Muntenia Region of Romania. The soil deposited on the bioremediation platform had been contaminated through extraction and oil processing. For the research work, two average soil samples from two different plots were analyzed: the first one was slightly contaminated with petroleum products (Total Petroleum Hydrocarbons (TPH) in soil was 1420 mg/kg d.w.), while the second one was highly contaminated (TPH in soil was 24306 mg/kg d.w.). In order to evaluate the risks posed by heavy metals due to soil pollution with petroleum products, five metals known as carcinogenic were investigated: arsenic (As), cadmium (Cd), chromium VI (CrVI), nickel (Ni), and lead (Pb). Results of the chemical analysis performed on the samples collected from the contaminated soil evidence soil contamination with heavy metals as follows: As in Site 1 = 6.96 mg/kg d.w.; As in Site 2 = 11.62 mg/kg d.w.; Cd in Site 1 = 0.9 mg/kg d.w.; Cd in Site 2 = 1 mg/kg d.w.; CrVI was 0.1 mg/kg d.w. for both sites; Ni in Site 1 = 37.00 mg/kg d.w.; Ni in Site 2 = 42.46 mg/kg d.w.; Pb in Site 1 = 34.67 mg/kg d.w.; Pb in Site 2 = 120.44 mg/kg d.w.
The concentrations of these metals exceed the normal values established in the Romanian regulation but are smaller than the alert level for a less sensitive use of soil (industrial). Although the concentrations do not exceed the thresholds, the next step was to assess the human health risk posed by soil contamination with these heavy metals. Results for risk were compared with the acceptable value (10⁻⁶, according to the World Health Organization). As expected, the highest risk was identified for the soil with the higher degree of contamination: the Individual Risk (IR) was 1.11×10⁻⁵, compared with 8.61×10⁻⁶.
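
As a rough sketch of how such an individual risk figure is typically obtained, the snippet below follows the generic US EPA chronic-daily-intake (CDI) approach for accidental soil ingestion. The exposure defaults and the arsenic oral slope factor (1.5 (mg/kg·day)⁻¹, an EPA IRIS literature value) are illustrative assumptions, not necessarily the parameters the authors used:

```python
# Generic chronic-daily-intake sketch for accidental soil ingestion.
# All default exposure parameters are illustrative literature values.
def ingestion_risk(conc_mg_kg, slope_factor, ir_soil_mg_day=100.0,
                   ef_days_yr=350.0, ed_years=24.0, bw_kg=70.0,
                   at_days=70.0 * 365.0):
    """Individual excess cancer risk = CDI * oral slope factor."""
    cdi = (conc_mg_kg * 1e-6 * ir_soil_mg_day * ef_days_yr * ed_years) \
        / (bw_kg * at_days)
    return cdi * slope_factor

# Arsenic at Site 2 (11.62 mg/kg d.w.) with a slope factor of 1.5
# lands in the same 10^-6 to 10^-5 band as the risks reported above.
risk_as = ingestion_risk(11.62, 1.5)
```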

Keywords: carcinogenic risk, heavy metals, human health risk assessment, soil pollution

Procedia PDF Downloads 422
586 Metabolically Healthy Obesity and Protective Factors of Cardiovascular Diseases as a Result from a Longitudinal Study in Tebessa (East of Algeria)

Authors: Salima Taleb, Kafila Boulaba, Ahlem Yousfi, Nada Taleb, Difallah Basma

Abstract:

Introduction: Obesity is recognized as a cardiovascular risk factor. It is associated with cardio-metabolic diseases, and its prevalence is increasing significantly in both rich and poor countries. However, there are obese people who have no metabolic disturbance; obesity is thus not always a risk factor for an abnormal metabolic profile that increases the risk of cardiometabolic problems. Yet there is no agreed definition that allows us to identify the group of Metabolically Healthy but Obese (MHO) individuals. Objective: The objective of this study is to evaluate the relationship between MHO and some factors associated with it. Methods: This longitudinal, prospective cohort study followed 600 participants aged ≥18 years. Metabolic status was assessed by the following parameters: blood pressure, fasting glucose, total cholesterol, HDL cholesterol, LDL cholesterol, and triglycerides. Body Mass Index (BMI) was calculated as weight (in kg) divided by the square of height (in m): BMI = Weight/(Height)². According to the BMI value, our population was divided into four groups: underweight subjects with BMI < 18.5 kg/m², normal-weight subjects with BMI = 18.5-24.9 kg/m², overweight subjects with BMI = 25-29.9 kg/m², and obese subjects with BMI ≥ 30 kg/m². A value of P < 0.05 was considered significant. Statistical processing was done using the SPSS 25 software. Results: In this study, 194 (32.33%) participants were identified as MHO among 416 (37%) obese individuals. The prevalence of the metabolically unhealthy phenotype among normal-weight individuals was 13.83%, vs. 37% in obese individuals. Compared with metabolically healthy normal-weight individuals (10.93%), the prevalence of diabetes was 30.60% in MHO, 20.59% in metabolically unhealthy normal-weight, and 52.29% in metabolically unhealthy obese individuals (p = 0.032).
Blood pressure was significantly higher in MHO individuals than in metabolically healthy normal-weight individuals and in metabolically unhealthy obese than in metabolically unhealthy normal weight (P < 0.0001). Familial coronary artery disease does not appear to have an effect on the metabolic status of obese and normal-weight patients (P = 0.544). However, waist circumference appears to have an effect on the metabolic status of individuals (P < 0.0001). Conclusion: This study showed a high prevalence of metabolic profile disruption in normal-weight subjects and a high rate of overweight and/or obese people who are metabolically healthy. To understand the physiological mechanism related to these metabolic statuses, a thorough study is needed.
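
The BMI grouping used in the study can be expressed directly from the cutoffs given above:

```python
def bmi_category(weight_kg, height_m):
    """Classify by BMI = weight / height**2 using the study's cutoffs."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal weight"
    if bmi < 30.0:
        return "overweight"
    return "obese"
```

For example, an 85 kg, 1.65 m participant (BMI ≈ 31.2) falls in the obese group used for the MHO analysis.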

Keywords: metabolically healthy, obesity, associated factors, cardiovascular diseases

Procedia PDF Downloads 117
585 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years in its practical integration into medical solutions; for example, extracting clinical concepts from medical texts such as medical condition, medication, treatment, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes this process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that requires neither costly labeled data nor ML expertise. The method includes three steps. Step 1: the user provides a large in-domain text corpus (e.g., PubMed); the system then builds a contextual model containing vector representations of concepts in the corpus in an unsupervised manner (e.g., Phrase2Vec). Step 2: the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide: ‘dry mouth,’ ‘itchy skin,’ and ‘blurred vision’); the system matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3: in production, there is a need to extract medical concepts from unseen medical text. The system extracts key-phrases from the new text, then matches them against the complete set of terms from step 2, and the most semantically similar terms are annotated with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: “The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be an early sign of Diabetes.” Our evaluations show promising results for extracting concepts from medical corpora.
The method allows medical analysts to easily and efficiently build taxonomies (in step 2) representing their domain-specific concepts, and automatically annotate a large number of texts (in step 3) for classification/summarization of medical reports.
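
Step 2, the seed-set expansion, can be sketched with a toy embedding table; the phrases and vectors below are illustrative, standing in for a Phrase2Vec model trained on a corpus such as PubMed:

```python
import numpy as np

# Toy phrase-embedding table standing in for the unsupervised
# contextual model; vectors are made up for illustration.
emb = {
    "dry mouth":      np.array([0.9, 0.1, 0.0]),
    "itchy skin":     np.array([0.8, 0.2, 0.1]),
    "blurred vision": np.array([0.85, 0.15, 0.05]),
    "weight loss":    np.array([0.7, 0.3, 0.1]),
    "metformin":      np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand(seeds, k=1):
    """Rank non-seed phrases by similarity to the seed-set centroid."""
    centroid = np.mean([emb[s] for s in seeds], axis=0)
    cands = [(p, cosine(centroid, v))
             for p, v in emb.items() if p not in seeds]
    cands.sort(key=lambda x: -x[1])
    return [p for p, _ in cands[:k]]

# 'weight loss' is closest to the symptom seeds, 'metformin' is not.
symptoms = expand(["dry mouth", "itchy skin", "blurred vision"])
```

Step 3 then reduces to running the same similarity match between extracted key-phrases and the expanded term set.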

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 135
584 Experimental and Numerical Investigation of Micro-Welding Process and Applications in Digital Manufacturing

Authors: Khaled Al-Badani, Andrew Norbury, Essam Elmshawet, Glynn Rotwell, Ian Jenkinson, James Ren

Abstract:

Micro welding procedures are widely used for joining materials and developing duplex components or functional surfaces, through various methods such as the micro discharge welding or spot welding process, which can be found in the engineering, aerospace, automotive, biochemical, biomedical and numerous other industries. The relationship between the material properties, structure and processing is very important for improving the structural integrity and the final performance of the welded joints. This includes controlling the shape and the size of the welding nugget, the state of the heat-affected zone, residual stress, etc. Nowadays, modern high-volume production requires the welding of more versatile shapes/sizes and material systems that are suitable for various applications. Hence, an improved understanding of the micro welding process and of digital tools, based on computational numerical modelling linking key welding parameters, dimensional attributes and functional performance of the weldment, would directly benefit the industry in developing products that meet current and future market demands. This paper will introduce recent work on developing an integrated experimental and numerical modelling code for micro welding techniques. This includes similar and dissimilar materials, for both ferrous and non-ferrous metals, at different scales. The paper will also present a comparative study concerning the differences between the micro discharge welding process and the spot welding technique, with regard to the size effect of the welding zone and the changes in the material structure. A numerical modelling method for the micro welding processes and their effects on the material properties, during melting and cooling progression at different scales, will also be presented.
Finally, the applications of the integrated numerical modelling and the material development for the digital manufacturing of welding are discussed with reference to typical application cases such as sensors (thermocouples), energy (heat exchangers) and automotive structures (duplex steel structures).

Keywords: computer modelling, droplet formation, material distortion, materials forming, welding

Procedia PDF Downloads 255
583 Quantum Information Scrambling and Quantum Chaos in Silicon-Based Fermi-Hubbard Quantum Dot Arrays

Authors: Nikolaos Petropoulos, Elena Blokhina, Andrii Sokolov, Andrii Semenov, Panagiotis Giounanlis, Xutong Wu, Dmytro Mishagli, Eugene Koskin, Robert Bogdan Staszewski, Dirk Leipold

Abstract:

We investigate entanglement and quantum information scrambling (QIS) using the example of a many-body Extended and spinless effective Fermi-Hubbard Model (EFHM and e-FHM, respectively) that describes a special type of quantum dot array provided by Equal1 Labs' silicon-based quantum computer. The concept of QIS is used in the framework of quantum information processing by quantum circuits and quantum channels. In general, QIS manifests as the delocalization of quantum information over the entire quantum system; more compactly, information about the input cannot be obtained by local measurements of the output of the quantum system. In our work, we first introduce the concept of quantum information scrambling and its connection with the 4-point out-of-time-order (OTO) correlators. In order to have a quantitative measure of QIS, we use the tripartite mutual information, along similar lines to previous works, which measures the mutual information between four different spacetime partitions of the system, and study the Transverse Field Ising (TFI) model; this is used to quantify the dynamical spreading of quantum entanglement and information in the system. Then, we investigate scrambling in the quantum many-body Extended Hubbard Model with external magnetic field Bz and spin-spin coupling J for both uniform and thermal quantum channel inputs and show that it scrambles for specific external tuning parameters (e.g., tunneling amplitudes, on-site potentials, magnetic field). In addition, we compare different Hilbert space sizes (different numbers of qubits) and show the qualitative and quantitative differences in quantum scrambling as we increase the number of quantum degrees of freedom in the system. Moreover, we find a "scrambling phase transition" at a threshold temperature in the thermal case, that is, the temperature at which the channel starts to scramble quantum information.
Finally, we make comparisons to the TFI model and highlight the key physical differences between the two systems and mention some future directions of research.
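
In the standard formulation used in the scrambling literature (which the description above appears to follow), the tripartite mutual information is built from von Neumann entropies of an input partition A and output partitions C and D of the channel:

```latex
% Mutual information between two partitions from von Neumann entropies:
I(A{:}B) = S_A + S_B - S_{AB}
% Tripartite mutual information over input part A and output parts C, D;
% I_3 < 0 diagnoses scrambling by the channel:
I_3(A{:}C{:}D) = I(A{:}C) + I(A{:}D) - I(A{:}CD)
```

A strongly negative I₃ means information about A cannot be recovered from local measurements on C or D alone, matching the operational definition of QIS given above.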

Keywords: condensed matter physics, quantum computing, quantum information theory, quantum physics

Procedia PDF Downloads 99
582 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh

Authors: Zahid Khalil, Saad Ul Haque, Asif Khan

Abstract:

Decision making about identifying suitable sites for any project by considering different parameters is difficult. Using GIS and Multi-Criteria Analysis (MCA) can make it easier for such projects, and this technology has proved to be an efficient and adequate means of acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. The GIS software was used to create all the spatial parameters for the analysis. The derived parameters are slope, drainage density, rainfall, land use/land cover, soil groups, Curve Number (CN) and runoff index, with a spatial resolution of 30 m. The data used for deriving the above layers include 30-meter resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centers for Environmental Prediction (NCEP) and soil data from the World Harmonized Soil Database (WHSD). The land use/land cover map is derived from Landsat 8 using supervised classification. Slope, drainage network and watershed are delineated by terrain processing of the DEM. The Soil Conservation Service (SCS) method is implemented to estimate the surface runoff from the rainfall. Prior to this, the SCS-CN grid is developed by integrating the soil and land use/land cover rasters. These layers, with some technical and ecological constraints, are assigned weights on the basis of suitability criteria. The pairwise comparison method, also known as the Analytical Hierarchy Process (AHP), is used as the MCA method for assigning weights to each decision element. All the parameters and groups of parameters are integrated using weighted overlay in the GIS environment to produce suitable sites for the dams. The resultant layer is then classified into four classes, namely best suitable, suitable, moderate and less suitable. This study demonstrates decision-making about suitable-site analysis for small dams using geospatial data with a minimal amount of ground data.
These suitability maps can help water resource management organizations determine feasible rainwater harvesting (RWH) structures.
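
The AHP weighting step can be sketched as follows; the 3×3 pairwise comparison matrix below is an illustrative example on Saaty's 1-9 scale (say, for slope, runoff index and land use), not the study's actual judgments:

```python
import numpy as np

# Illustrative pairwise comparison matrix: entry [i, j] says how much
# more important criterion i is than criterion j (reciprocal below).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# AHP priority weights: principal right eigenvector, normalized to 1.
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()

# Consistency check: CR = CI / RI with RI = 0.58 for n = 3;
# CR < 0.1 means the judgments are acceptably consistent.
lam = np.max(np.real(vals))
ci = (lam - 3.0) / (3.0 - 1.0)
cr = ci / 0.58
```

The resulting weights `w` are what the weighted-overlay step multiplies against each reclassified raster layer.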

Keywords: remote sensing, GIS, AHP, RWH

Procedia PDF Downloads 389
581 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of ≳65,000 color templates for comparison with observed objects. The method aims at extracting the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte-Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters.
The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
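
A minimal sketch of template-based photometric classification, reduced to a chi-square match over two colors; the template colors and photometric errors are toy values, standing in for the ~65,000-template library and its per-filter error model:

```python
import numpy as np

# Toy template library: mean colors per class in two color indices.
templates = {
    "star":   np.array([0.5, 0.2]),
    "galaxy": np.array([1.1, 0.6]),
    "quasar": np.array([0.2, 0.9]),
}

def classify(colors, sigma):
    """Pick the template minimizing chi^2 given photometric errors;
    equivalently, maximizing a Gaussian likelihood per template."""
    chi2 = {cls: float(np.sum(((colors - tpl) / sigma) ** 2))
            for cls, tpl in templates.items()}
    return min(chi2, key=chi2.get)

obj = classify(np.array([1.0, 0.55]), sigma=np.array([0.1, 0.1]))
```

The full method replaces the single chi-square winner with probability density functions over class and redshift, but the template-comparison core is the same.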

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 80
580 Neuroevolution Based on Adaptive Ensembles of Biologically Inspired Optimization Algorithms Applied for Modeling a Chemical Engineering Process

Authors: Sabina-Adriana Floria, Marius Gavrilescu, Florin Leon, Silvia Curteanu, Costel Anton

Abstract:

Neuroevolution is a subfield of artificial intelligence used to solve various problems in different application areas. Specifically, neuroevolution is a technique that applies biologically inspired methods to generate neural network architectures and optimize their parameters automatically. In this paper, we use different biologically inspired optimization algorithms in an ensemble strategy with the aim of training multilayer perceptron neural networks, resulting in regression models used to simulate the industrial chemical process of obtaining bricks from silicone-based materials. Installations in the raw ceramics industry, i.e., brick production, are characterized by significant energy consumption and large quantities of emissions. In addition, the initial conditions that were taken into account during the design and commissioning of the installation can change over time, which leads to the need to add new mixes to adjust the operating conditions for the desired purpose, e.g., material properties and energy saving. The present approach studies, by simulation, a process of obtaining bricks from silicone-based materials, i.e., the modeling and optimization of the process. Optimization aims to determine the working conditions that minimize the emissions represented by nitrogen monoxide. We first use a search procedure to find the best values for the parameters of various biologically inspired optimization algorithms. Then, we propose an adaptive ensemble strategy that uses only a subset of the best algorithms identified in the search stage. The adaptive ensemble strategy combines the results of the selected algorithms and automatically assigns more processing capacity to the more efficient algorithms. Their efficiency may also vary at different stages of the optimization process. In a given ensemble iteration, the most efficient algorithms aim to maintain good convergence, while the less efficient algorithms can improve population diversity.
The proposed adaptive ensemble strategy outperforms the individual optimizers and the non-adaptive ensemble strategy in convergence speed and yields lower error values.
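
The adaptive capacity allocation described above can be sketched as a budget split proportional to each optimizer's recent performance; the optimizer names and error values below are illustrative, not the paper's:

```python
# Split the next iteration's evaluation budget across ensemble members
# in proportion to the inverse of each member's best error so far, so
# better-performing optimizers receive more processing capacity.
def allocate(budget, best_errors):
    scores = {name: 1.0 / err for name, err in best_errors.items()}
    total = sum(scores.values())
    return {name: round(budget * s / total) for name, s in scores.items()}

# Hypothetical ensemble: particle swarm, differential evolution,
# artificial bee colony, with made-up best errors.
shares = allocate(100, {"pso": 0.02, "de": 0.05, "abc": 0.10})
```

Reallocating at each ensemble iteration lets the split track efficiency as it varies across optimization stages, while weaker members still receive a nonzero share and contribute diversity.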

Keywords: optimization, biologically inspired algorithm, neuroevolution, ensembles, bricks, emission minimization

Procedia PDF Downloads 116
579 Exploring Polyphenolics Content and Antioxidant Activity of R. damascena Dry Extract by Spectroscopic and Chromatographic Techniques

Authors: Daniela Nedeltcheva-Antonova, Kamelia Getchovska, Vera Deneva, Stanislav Bozhanov, Liudmil Antonov

Abstract:

Rosa damascena Mill. (Damask rose) is one of the most important plants belonging to the Rosaceae family, with a long history of use in traditional medicine and as a valuable oil-bearing plant. Many pharmacological effects have been reported for this plant, including anti-inflammatory, hypnotic, analgesic, anticonvulsant, anti-depressant, antianxiety, antitussive, antidiabetic, relaxant effects on tracheal chains, laxative, prokinetic and hepatoprotective activities. Pharmacological studies have shown that the various health effects of R. damascena flowers can mainly be attributed to its large amount of polyphenolic components. Phenolics possess a wide range of pharmacological activities, such as antioxidant, free-radical scavenging, anticancer, anti-inflammatory, antimutagenic, and antidepressant activities, with flavonoids being the most numerous group of natural polyphenolic compounds. Given the technological process used in the production of rose concrete (solvent extraction of fresh rose flowers with non-polar solvents), it can be assumed that the resulting plant residue would be as rich in polyphenolics as the plant itself, and could be used for the development of novel products with promising health-promoting effects. Therefore, an optimisation of the extraction procedure for the by-product of rose concrete production was carried out. An assay of the extracts with respect to their total polyphenol and total flavonoid content was performed. HPLC analysis of quercetin and kaempferol, the two main flavonoids found in R. damascena, was also carried out. The preliminary results have shown that the flavonoid content of the rose extracts is comparable to that of green tea or Ginkgo biloba, and they could be used for the development of various products (food supplements, natural cosmetics, phyto-pharmaceutical formulations, etc.).
The fact that they are derived from a by-product of industrial plant processing could add to the marketing value of the final products, in addition to the well-known reputation of products obtained from Bulgarian roses (R. damascena Mill.).

Keywords: gas chromatography-mass spectrometry, dry extract, flavonoids, Rosa damascena Mill.

Procedia PDF Downloads 152
578 Effect of Perceived Importance of a Task in the Prospective Memory Task

Authors: Kazushige Wada, Mayuko Ueda

Abstract:

In the present study, we reanalyzed lapse errors in the last phase of a job by re-counting near-lapse errors and increasing the number of participants. We also examined the results of this study from the perspective of prospective memory (PM), which concerns future actions. This study was designed to investigate whether perceiving the importance of PM tasks caused lapse errors in the last phase of a job, and to determine if such errors could be explained from the perspective of PM processing. Participants (N = 34) conducted a computerized clicking task, in which they clicked on 10 figures that they had learned in advance, in 8 blocks of 10 trials. Participants were requested to click the check box in the start display of a block and to click the checking-off box in the finishing display. This task was a PM task. As a measure of PM performance, we counted the number of omission errors caused by forgetting to check off in the finishing display, which was defined as a lapse error. The perceived importance was manipulated by different instructions. Half of the participants, in the highly important task condition, were instructed that checking off was very important because equipment would be overloaded if it were not done; the other half, in the not important task condition, were instructed only about the location and procedure for checking off. Furthermore, we controlled workload and the emotion of surprise to confirm the effect of demand capacity and attention. To manipulate emotions during the clicking task, we suddenly presented a photo of a traffic accident and the sound of a skidding car followed by an explosion. Workload was manipulated by requesting participants to press the 0 key in response to a beep. Results indicated too few forgetting-induced lapse errors to be analyzed. However, there was a weak main effect of the perceived importance of the check task, in which the mouse moved to the “END” button before moving to the check box in the finishing display.
In particular, the highly important task group showed more such near-lapse errors than the not-important task group. Neither surprise nor workload affected the occurrence of near-lapse errors. These results imply that high perceived importance of PM tasks impairs task performance. On the basis of the multiprocess framework of PM theory, we suggest that PM task performance in this experiment relied not on monitoring of PM tasks but on spontaneous retrieval.

Keywords: prospective memory, perceived importance, lapse errors, multiprocess framework of prospective memory

Procedia PDF Downloads 446
577 Effects of Sensory Integration Techniques in Science Education of Autistic Students

Authors: Joanna Estkowska

Abstract:

Sensory integration methods are very useful and improve the daily functioning of autistic and intellectually disabled children. Autism is a neurobiological disorder that impairs one's ability to communicate with and relate to others, as well as the functioning of the sensory system. Children with autism, even highly functioning kids, can find it difficult to process language amid surrounding noise or smells. They are hypersensitive to stimuli we can ignore, such as sights, sounds and touch. Adolescents with highly functioning autism or Asperger Syndrome can study science and math, but the social aspect is difficult for them. Nature science is an area of study that attracts many of these kids. It is a systematic field in which the children can focus on a small aspect: if they follow the rules, they can arrive at an expected result. Sensory integration programs and systematic classroom observation are quantitative methods of measuring classroom functioning and behaviors from direct observations. These methods specify both the events and behaviors that are to be observed and how they are to be recorded. Our students with and without autism attended the lessons in the classroom of nature science in the school and in the laboratory of the University of Science and Technology in Bydgoszcz. The aim of this study is to investigate the effects of sensory integration methods in teaching students with autism. They were observed during experimental lessons in the classroom and in the laboratory. Their physical characteristics, sensory dysfunctions, and behavior in class were taken into consideration by comparing their similarities and differences. In the chemistry classroom, every autistic student is paired with a mentor from their school. In the laboratory, the children are expected to wear goggles, gloves and a lab coat. The chemistry classes in the laboratory were held for four hours with a lunch break, and according to the assistants, the children were engaged the whole time. 
In the classroom of nature science, the students are encouraged to use the interactive exhibition of chemical, physical and mathematical models constructed by the author of this paper. Our students with and without autism attended the lessons in those laboratories. The teacher's goals are to assist the child in inhibiting and modulating sensory information and to support the child in processing a response to sensory stimulation.

Keywords: autism spectrum disorder, science education, sensory integration techniques, student with special educational needs

Procedia PDF Downloads 192
576 12 Real Forensic Caseworks Solved by DNA STR-Typing of Skeletal Remains Exposed to Extreme Environmental Conditions without the Conventional Bone Pulverization Step

Authors: Chiara Della Rocca, Gavino Piras, Andrea Berti, Alessandro Mameli

Abstract:

DNA identification of human skeletal remains plays a valuable role in the forensic field, especially in missing persons and mass disaster investigations. Hard tissues, such as bones and teeth, represent a very common kind of sample analyzed in forensic laboratories because they are often the only biological materials remaining. However, the major limitation of using these compact samples lies in the extremely time-consuming and labor-intensive treatment of grinding them into powder before proceeding with the conventional DNA purification and extraction step. In this context, a DNA extraction assay called the TBone Ex kit (DNA Chip Research Inc.) was developed to digest bone chips without powdering. Here, we simultaneously analyzed bone and tooth samples that arrived at our police laboratory and belonged to 15 different forensic caseworks that occurred in Sardinia (Italy). A total of 27 samples were recovered from different scenarios and had been exposed to extreme environmental factors, including sunlight, seawater, soil, fauna, vegetation, and high temperature and humidity. The TBone Ex kit was used prior to the EZ2 DNA extraction kit on the EZ2 Connect Fx instrument (Qiagen), and high-quality autosomal and Y-chromosome STR profiles were obtained for 80% of the caseworks in an extremely short time frame. This study provides additional support for the use of the TBone Ex kit for digesting bone fragments/whole teeth as an effective alternative to pulverization protocols. We empirically demonstrated the effectiveness of the kit in processing multiple bone samples simultaneously, largely simplifying the DNA extraction procedure, and the good yield of recovered DNA for downstream genetic typing in highly compromised real forensic specimens. 
In conclusion, this study proves extremely useful for forensic laboratories, from which the various actors of the criminal justice system, such as potential jury members, judges, defense attorneys, and prosecutors, require immediate feedback.

Keywords: DNA, skeletal remains, bones, tbone ex kit, extreme conditions

Procedia PDF Downloads 46
575 Automated Feature Extraction and Object-Based Detection from High-Resolution Aerial Photos Based on Machine Learning and Artificial Intelligence

Authors: Mohammed Al Sulaimani, Hamad Al Manhi

Abstract:

With the development of Remote Sensing technology, the resolution of optical Remote Sensing images has greatly improved, and images have become widely available. Numerous detectors have been developed for detecting different types of objects. In the past few years, Remote Sensing has benefited greatly from deep learning, particularly Deep Convolutional Neural Networks (CNNs). Deep learning holds great promise for fulfilling the challenging needs of Remote Sensing and solving various problems within different fields and applications. The use of Unmanned Aerial Systems (UAS) for acquiring aerial photos has become widespread and is preferred by most organizations to support their activities because of the high resolution and accuracy of these photos, which make the identification and detection of very small features much easier than with satellite images. This has opened a new era of Deep Learning in different applications, not only in feature extraction and prediction but also in analysis. This work addresses the capability of Machine Learning and Deep Learning to detect and extract Oil Leaks from onshore flowlines using High-Resolution Aerial Photos acquired by a UAS fitted with an RGB sensor, to support early detection of these leaks and protect the company from the leaks' losses and, most importantly, from environmental damage. Two different approaches using different DL methods are demonstrated. The first approach focuses on detecting the Oil Leaks from the raw (unprocessed) Aerial Photos using a deep learning model called Single Shot Detector (SSD). The model draws bounding boxes around the leaks, and the results were extremely good. The second approach focuses on detecting the Oil Leaks from Ortho-mosaiced (georeferenced) images by developing three Deep Learning models (MaskRCNN, U-Net and a PSP-Net classifier). 
Then, post-processing is performed to combine the results of these three Deep Learning models to achieve a better detection result and improved accuracy. Although a relatively small amount of data was available for training purposes, the trained DL models have shown good results in extracting the extent of the Oil Leaks and have achieved excellent and accurate detection.
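The abstract does not specify how the three segmentation outputs are merged; a common post-processing choice is a per-pixel majority vote over the binary leak masks. A minimal sketch under that assumption (the toy 3x3 masks stand in for real MaskRCNN, U-Net and PSP-Net outputs):

```python
import numpy as np

def majority_vote(masks):
    """Combine binary segmentation masks (0 = background, 1 = leak)
    from several models into one mask by strict per-pixel majority vote."""
    stacked = np.stack(masks)            # shape: (n_models, H, W)
    votes = stacked.sum(axis=0)          # leak votes per pixel
    return (votes * 2 > len(masks)).astype(np.uint8)

# Toy 3x3 outputs standing in for the three model predictions.
m1 = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]], dtype=np.uint8)
m2 = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]], dtype=np.uint8)
m3 = np.array([[1, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=np.uint8)
fused = majority_vote([m1, m2, m3])   # keeps pixels flagged by >= 2 models
```

A vote across models suppresses single-model false positives, which is one plausible reason a fused mask would score better than any individual model.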

Keywords: GIS, remote sensing, oil leak detection, machine learning, aerial photos, unmanned aerial systems

Procedia PDF Downloads 34
574 The Role of Artificial Intelligence in Patent Claim Interpretation: Legal Challenges and Opportunities

Authors: Mandeep Saini

Abstract:

The rapid advancement of Artificial Intelligence (AI) is transforming various fields, including intellectual property law. This paper explores the emerging role of AI in interpreting patent claims, a critical and highly specialized area within intellectual property rights. Patent claims define the scope of legal protection granted to an invention, and their precise interpretation is crucial in determining the boundaries of the patent holder's rights. Traditionally, this interpretation has relied heavily on the expertise of patent examiners, legal professionals, and judges. However, the increasing complexity of modern inventions, especially in fields like biotechnology, software, and electronics, poses significant challenges to human interpretation. Introducing AI into patent claim interpretation raises several legal and ethical concerns. This paper addresses critical issues such as the reliability of AI-driven interpretations, the potential for algorithmic bias, and the lack of transparency in AI decision-making processes. It considers the legal implications of relying on AI, particularly regarding accountability for errors and the potential challenges to AI interpretations in court. The paper includes a comparative study of AI-driven patent claim interpretations versus human interpretations across different jurisdictions to provide a comprehensive analysis. This comparison highlights the variations in legal standards and practices, offering insights into how AI could impact the harmonization of international patent laws. The paper proposes policy recommendations for the responsible use of AI in patent law. It suggests legal frameworks that ensure AI tools complement, rather than replace, human expertise in patent claim interpretation. These recommendations aim to balance the benefits of AI with the need for maintaining trust, transparency, and fairness in the legal process. 
By addressing these critical issues, this research contributes to the ongoing discourse on integrating AI into the legal field, specifically within intellectual property rights. It provides a forward-looking perspective on how AI could reshape patent law, offering both opportunities for innovation and challenges that must be carefully managed to protect the integrity of the legal system.

Keywords: artificial intelligence (ai), patent claim interpretation, intellectual property rights, algorithmic bias, natural language processing, patent law harmonization, legal ethics

Procedia PDF Downloads 21
573 The Impact of Electrospinning Parameters on Surface Morphology and Chemistry of PHBV Fibers

Authors: Lukasz Kaniuk, Mateusz M. Marzec, Andrzej Bernasik, Urszula Stachewicz

Abstract:

Electrospinning is one of the most commonly used methods to produce micro- or nano-fibers. The properties of electrospun fibers allow them to be used to produce tissue scaffolds, biodegradable bandages, or purification membranes. The morphology of the obtained fibers depends on the composition of the polymer solution as well as on the processing parameters. Interesting properties such as high fiber porosity can be achieved by changing the humidity during electrospinning. Moreover, by changing the voltage polarity in electrospinning, we are able to alternate the functional groups at the surface of the fibers. In this study, electrospun fibers were made of a natural, thermoplastic polyester, PHBV (poly(3-hydroxybutyric acid-co-3-hydroxyvaleric acid)). The fibrous mats were obtained using both positive and negative voltage polarities, and their surface was characterized using X-ray photoelectron spectroscopy (XPS, Ulvac-Phi, Chigasaki, Japan). Furthermore, the effect of humidity on surface morphology was investigated using scanning electron microscopy (SEM, Merlin Gemini II, Zeiss, Germany). Electrospun PHBV fibers produced with positive and negative voltage polarity had similar morphology and average fiber diameters of 2.47 ± 0.21 µm and 2.44 ± 0.15 µm, respectively. The change of voltage polarity had a significant impact on the reorientation of the carbonyl groups, which consequently changed the surface potential of the electrospun PHBV fibers. Increasing the humidity during electrospinning introduces porosity into the surface structure of the fibers. In conclusion, our studies showed that process parameters such as humidity and voltage polarity have a great influence on fiber morphology and chemistry, changing their functionality. Surface properties of polymer fibers have a significant impact on cell integration and attachment, which is very important in tissue engineering. 
The possibility of changing surface porosity allows the use of these fibers in various tissue engineering and drug delivery systems. Acknowledgment: This study was conducted within the 'Nanofiber-based sponges for atopic skin treatment' project, carried out within the First TEAM programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund, project no. POIR.04.04.00-00-4571/18-00.

Keywords: cells integration, electrospun fiber, PHBV, surface characterization

Procedia PDF Downloads 118
572 A Computational Fluid Dynamics Simulation of Single Rod Bundles with 54 Fuel Rods without Spacers

Authors: S. K. Verma, S. L. Sinha, D. K. Chandraker

Abstract:

The Advanced Heavy Water Reactor (AHWR) is a vertical pressure-tube-type, heavy-water-moderated and boiling-light-water-cooled natural-circulation-based reactor. The fuel bundle of the AHWR contains 54 fuel rods arranged in three concentric rings of 12, 18 and 24 fuel rods. This fuel bundle is divided into a number of imaginary interacting flow passages called subchannels. Single-phase flow conditions exist in the reactor rod bundle during startup and up to a certain length of the rod bundle when it is operating at full power. Prediction of the thermal margin of the reactor during startup has necessitated the determination of the turbulent mixing rate of coolant among these subchannels. Thus, it is vital to evaluate turbulent mixing between the subchannels of the AHWR rod bundle. With the remarkable progress in computer processing power, the computational fluid dynamics (CFD) methodology can be useful for investigating the thermal-hydraulic phenomena in the nuclear fuel assembly. The present report covers the results of simulations of pressure drop, velocity variation and turbulence intensity in a single rod bundle with 54 rods in circular arrays. In this investigation, 54-rod assemblies are simulated with ANSYS Fluent 15 using steady simulations with ANSYS Workbench meshing. The simulations have been carried out with water at a Reynolds number of 9861.83. The rod bundle has a mean flow area of 4853.0584 mm² in the bare region, with a hydraulic diameter of 8.105 mm. In the present investigation, a benchmark k-ε model has been used as the turbulence model, and the symmetry condition is set as the boundary condition. Simulations are carried out to determine the turbulent mixing rate in the simulated subchannels of the reactor. The size of the rods and the pitch in the test have been the same as those of the actual rod bundle in the prototype. 
Water has been used as the working fluid, and the turbulent mixing tests have been carried out at atmospheric conditions without heat addition. The mean velocity in the subchannel has been varied from 0 to 1.2 m/s. The flow conditions are found to be close to the actual reactor conditions.
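As a quick consistency check on the quoted numbers, the bulk velocity implied by the stated Reynolds number and hydraulic diameter can be recovered from Re = ρvD_h/μ. A small sketch, assuming water properties at about 20 °C (the test temperature is not stated in the abstract):

```python
# Back out the bulk velocity from Re = rho * v * D_h / mu.
rho = 998.0      # kg/m^3, water density (assumed, ~20 degC)
mu = 1.002e-3    # Pa*s, dynamic viscosity (assumed, ~20 degC)
Re = 9861.83     # Reynolds number quoted in the abstract
D_h = 8.105e-3   # m, hydraulic diameter quoted in the abstract

v = Re * mu / (rho * D_h)   # bulk velocity, m/s
print(round(v, 3))
```

The result, roughly 1.22 m/s, sits at the upper end of the 0 to 1.2 m/s subchannel velocity range quoted above, so the stated parameters are mutually consistent.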

Keywords: AHWR, CFD, single-phase turbulent mixing rate, thermal–hydraulic

Procedia PDF Downloads 320
571 Comfort Sensor Using Fuzzy Logic and Arduino

Authors: Samuel John, S. Sharanya

Abstract:

Automation has become an important part of our life. It has been used to control home entertainment systems, to change the ambience of rooms for different events, etc. One of the main parameters to control in a smart home is atmospheric comfort, which mainly includes temperature and relative humidity. In homes, the desired temperature of different rooms varies from 20 °C to 25 °C and the desired relative humidity is around 50%; however, these values vary widely in practice. Hence, automated measurement of these parameters to ensure comfort assumes significance. To achieve this, a fuzzy logic controller on Arduino was developed using MATLAB. Arduino is open-source hardware built around a 28-pin ATmega328 microcontroller with 14 digital input/output pins and an inbuilt ADC. It runs on 5 V and 3.3 V supplies supported by an on-board voltage regulator. Some of the digital pins on the Arduino provide PWM (pulse width modulation) signals, which can be used in different applications. The Arduino platform provides an integrated development environment with support for the C and C++ programming languages. In the present work, a soft sensor was introduced into this system; it can indirectly measure temperature and humidity and can process several measurements to ensure comfort. The Sugeno method (whose output variables are functions or singletons/constants, making it more suitable for implementation on microcontrollers) was used for the soft sensor in MATLAB and then interfaced to the Arduino, which in turn is interfaced to the temperature and humidity sensor DHT11. The DHT11 acts as the sensing element in this system: it combines a capacitive humidity sensor and a thermistor to measure the temperature and relative humidity of the surroundings and provides a digital signal on its data pin. The comfort sensor developed was able to measure temperature and relative humidity correctly. 
The comfort percentage was calculated and accordingly the temperature in the room was controlled. This system was placed in different rooms of the house to ensure that it modifies the comfort values depending on temperature and relative humidity of the environment. Compared to the existing comfort control sensors, this system was found to provide an accurate comfort percentage. Depending on the comfort percentage, the air conditioners and the coolers in the room were controlled. The main highlight of the project is its cost efficiency.
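The abstract does not list the membership functions or rule base used; a minimal zero-order Sugeno sketch in the same spirit can illustrate how a comfort percentage falls out of temperature and humidity readings. The triangular memberships and rule constants below are illustrative assumptions, not the authors' tuning:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def comfort_percentage(temp_c, rh):
    """Zero-order Sugeno: each rule's weight is the min of its memberships;
    the output is the weighted average of the rule constants (0-100 %)."""
    temp_ok = tri(temp_c, 18.0, 22.5, 27.0)   # 'comfortable' temperature
    rh_ok = tri(rh, 30.0, 50.0, 70.0)         # 'comfortable' humidity
    rules = [
        (min(temp_ok, rh_ok), 100.0),          # both comfortable
        (min(temp_ok, 1 - rh_ok), 60.0),       # temperature ok, humidity off
        (min(1 - temp_ok, rh_ok), 60.0),       # humidity ok, temperature off
        (min(1 - temp_ok, 1 - rh_ok), 10.0),   # both uncomfortable
    ]
    total_w = sum(w for w, _ in rules)
    return sum(w * z for w, z in rules) / total_w

print(comfort_percentage(22.5, 50.0))  # → 100.0 at the ideal point
```

On the Arduino itself the same weighted-average defuzzification runs in a few lines of integer-friendly C, which is the practical appeal of the Sugeno form noted above.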

Keywords: arduino, DHT11, soft sensor, sugeno

Procedia PDF Downloads 312
570 Anodic Stability of Li₆PS₅Cl/PEO Composite Polymer Electrolytes for All-Solid-State Lithium Batteries: A First-Principles Molecular Dynamics Study

Authors: Hao-Wen Chang, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang

Abstract:

All-solid-state lithium batteries (ASSLBs) are increasingly recognized as a safer and more reliable alternative to conventional lithium-ion batteries due to their non-flammable nature and enhanced safety performance. ASSLBs utilize a range of solid-state electrolytes, including solid polymer electrolytes (SPEs), inorganic solid electrolytes (ISEs), and composite polymer electrolytes (CPEs). SPEs are particularly valued for their flexibility, ease of processing, and excellent interfacial compatibility with electrodes, though their ionic conductivity remains a significant limitation. ISEs, on the other hand, provide high ionic conductivity, broad electrochemical windows, and strong mechanical properties but often face poor interfacial contact with electrodes, impeding performance. CPEs, which merge the strengths of SPEs and ISEs, represent a compelling solution for next-generation ASSLBs by addressing both electrochemical and mechanical challenges. Despite their potential, the mechanisms governing lithium-ion transport within these systems remain insufficiently understood. In this study, we designed CPEs based on argyrodite-type Li₆PS₅Cl (LPSC) combined with two distinct polymer matrices: poly(ethylene oxide) (PEO) with 24.5 wt% lithium bis(trifluoromethane)sulfonimide (LiTFSI) and polycaprolactone (PCL) with 25.7 wt% LiTFSI. Through density functional theory (DFT) calculations, we investigated the interfacial chemistry of these materials, revealing critical insights into their stability and interactions. Additionally, ab initio molecular dynamics (AIMD) simulations of lithium electrodes interfaced with LPSC layers containing polymers and LiTFSI demonstrated that the polymer matrix significantly mitigates LPSC decomposition, compared to systems with only a lithium electrode and LPSC layers. 
These findings underscore the pivotal role of CPEs in improving the performance and longevity of ASSLBs, offering a promising path forward for next-generation energy storage technologies.

Keywords: all-solid-state lithium-ion batteries, composite solid electrolytes, DFT calculations, Li-ion transport

Procedia PDF Downloads 20
569 A Study of Female Casino Dealers' Job Stress and Job Satisfaction: The Case of Macau

Authors: Xinrong Zong, Tao Zhang

Abstract:

Macau is known as the Oriental Monte Carlo, and its economy depends heavily on gambling. The dealer is the key position in the gambling industry; at the end of the fourth quarter of 2015, there were over 24,000 dealers among the 56,000 full-time employees in the gambling industry, and more than half of the dealers were female. The dealer is also called a 'croupier'; their main responsibilities are shuffling, dealing, processing chips, rolling dice and inspecting play. Due to the limited land and small population of Macao, the government has not allowed the hiring of foreign dealers since Macao developed its gambling industry. Therefore, local dealers enjoy special advantages but also bear high stresses from work. From the middle of last year, with the reduced income from gambling and the decline of mainland gamblers as well as VIP lounges, the working time of dealers increased greatly. Thus, many problems occurred under these conditions, such as rising working pressures, psychological pressures and family-responsibility pressures, which may affect job satisfaction as well. Because there is little research on dealer satisfaction, and few studies analyze female dealers from a feminine perspective, this study focuses on investigating the relationship between working pressure and job satisfaction from a feminine point of view. Several issues will be discussed specifically: firstly, to understand the current situation of the working pressures and job satisfaction of female dealers of different ages; secondly, to examine whether there is any relation between the working pressures and job satisfaction of female dealers of different ages; thirdly, to find out the relationship between dealers' working pressures and job satisfaction across different ages. This paper combined a qualitative approach with a quantitative approach and selected samples by convenience sampling. 
The research showed that, first, female dealers of different ages have different kinds of working pressures; second, the job satisfaction of female dealers differs across ages; moreover, there is a negative correlation between working pressure and job satisfaction of female dealers within each age group; last but not least, working pressure has a significant negative impact on job satisfaction. The research results will provide a reference for the Macau gambling business: a pattern for improving dealers' working environment and increasing employees' job satisfaction, as well as offering tourists better service, which can help attract more and more visitors through a good image of Macau gaming and tourism.

Keywords: female dealers, job satisfaction, working pressure, Macau

Procedia PDF Downloads 297
568 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver

Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin

Abstract:

The National Space Organization (NSPO) completed in 2014 the development of a space-borne GPS receiver, including design, manufacture, comprehensive functional tests, environmental qualification tests and so on. The main performance figures of this receiver include 8-meter positioning accuracy, 0.05 m/sec velocity accuracy, a cold-start time of at most 90 seconds, and operation in high-dynamics scenarios of up to 15 g. The receiver will be integrated into the autonomous FORMOSAT-7 NSPO-built satellite scheduled to be launched in 2019 to execute pre-defined scientific missions. The flight model of this receiver, manufactured in early 2015, will undergo comprehensive functional tests and environmental acceptance tests, etc., which are expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing is executed by a digital signal processor (DSP), with currently only 50% of its throughput used. In response to the booming global navigation satellite systems, NSPO will gradually expand this receiver into a multi-mode, multi-band, high-precision navigation receiver, and even a science payload, such as a reflectometry receiver for a global navigation satellite system. The fundamental purpose of this extension study is to port software algorithms that involve reused code and a large amount of computation, such as signal acquisition and correlation, to an FPGA, while the processor remains responsible for operational control, the navigation solution, orbit propagation and so on. Because FPGAs develop and evolve rapidly, the new system architecture upgraded via an FPGA should be able to achieve the goal of being a multi-mode, multi-band, high-precision navigation receiver, or a scientific receiver. Finally, test results show that the new system architecture not only retains the original overall performance but also sets aside more resources for future expansion. 
This paper will explain the detailed DSP/FPGA architecture, development, test results, and the goals of the next development stage of this receiver.
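To see why acquisition and correlation are the natural candidates to offload, note that acquisition amounts to searching for the code-phase offset at which the received samples best correlate with a local replica of the spreading code. A toy sketch of that search (a random ±1 sequence stands in for a real C/A code; the length, delay and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)     # stand-in spreading code

true_delay = 350                               # unknown code phase
received = np.roll(code, true_delay)           # delayed replica ...
received = received + 0.5 * rng.standard_normal(1023)  # ... plus noise

# Brute-force circular correlation over all candidate code phases;
# these independent inner products are what an FPGA can parallelize.
corr = np.array([received @ np.roll(code, d) for d in range(1023)])
estimated_delay = int(np.argmax(corr))         # peak marks the code phase
```

Each candidate phase is an independent dot product, so the whole search maps cleanly onto parallel hardware, while the slower decision logic (tracking, navigation solution) stays on the processor, matching the DSP/FPGA split described above.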

Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band

Procedia PDF Downloads 369
567 The KAPSARC Energy Policy Database: Introducing a Quantified Library of China's Energy Policies

Authors: Philipp Galkin

Abstract:

Government policy is a critical factor in the understanding of energy markets. Nevertheless, it is rarely approached systematically from a research perspective. Gaining a precise understanding of what policies exist, their intended outcomes, geographical extent, duration, evolution, etc., would enable the research community to answer a variety of questions that, for now, are either oversimplified or ignored. Policy, on its surface, also seems a rather unstructured and qualitative undertaking. There may be quantitative components, but incorporating the concept of policy analysis into quantitative analysis remains a challenge. The KAPSARC Energy Policy Database (KEPD) is intended to address these two energy policy research limitations. Our approach is to represent policies within a quantitative library of the specific policy measures contained within a set of legal documents. Each of these measures is recorded in the database as a single entry characterized by a set of qualitative and quantitative attributes. Initially, we have focused on the major laws at the national level that regulate coal in China. However, KAPSARC is engaged in various efforts to apply this methodology to other energy policy domains. To ensure the scalability and sustainability of our project, we are exploring semantic processing using automated computer algorithms. Automated coding can provide more convenient input data for human coders and serve as a quality control option. Our initial findings suggest that the methodology utilized in the KEPD could be applied to any set of energy policies. It also provides a convenient tool to facilitate understanding in the energy policy realm, enabling researchers to quickly identify, summarize, and digest policy documents and specific policy measures. The KEPD captures a wide range of information about each individual policy contained within a single policy document. 
This enables a variety of analyses, such as structural comparison of policy documents, tracing policy evolution, stakeholder analysis, and exploring interdependencies of policies and their attributes with exogenous datasets using statistical tools. The usability and broad range of research implications suggest a need for the continued expansion of the KEPD to encompass a larger scope of policy documents across geographies and energy sectors.
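The one-entry-per-measure design described above can be pictured as a flat record mixing qualitative and quantitative attributes. The field names and example values below are illustrative assumptions, not the KEPD's actual schema:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class PolicyMeasure:
    """One policy measure extracted from one legal document (illustrative schema)."""
    document_id: str              # source legal document
    measure_text: str             # the measure as written
    instrument: str               # qualitative attribute, e.g. 'quota', 'tax', 'standard'
    sector: str                   # e.g. 'coal'
    jurisdiction: str             # geographical extent
    start_year: int               # start of validity
    end_year: Optional[int]       # None = open-ended
    target_value: Optional[float] # quantitative component, if any
    target_unit: Optional[str]

m = PolicyMeasure(
    document_id="CN-coal-law-illustrative",   # hypothetical identifier
    measure_text="Cap provincial coal output.",
    instrument="quota", sector="coal", jurisdiction="national",
    start_year=2016, end_year=None,
    target_value=3.9e9, target_unit="tonnes/year",
)
record = asdict(m)   # flat dict, ready for a tabular database or CSV export
```

Because every measure becomes one uniformly shaped row, the cross-document comparisons, evolution tracing and statistical joins mentioned above reduce to ordinary tabular queries.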

Keywords: China, energy policy, policy analysis, policy database

Procedia PDF Downloads 323
566 Nonlinear Evolution of the Pulses of Elastic Waves in Geological Materials

Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas

Abstract:

Nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the apparatus ‘GEOSCAN-02M’. The ultrasonic pulses are excited by pulses of a Q-switched Nd:YAG laser with a time duration of 10 ns and an energy of 260 mJ; this energy can be reduced to 20 mJ by light filters. The laser beam radius did not exceed 5 mm. As a result of the absorption of the laser pulse in a special material, the optoacoustic generator, pulses of longitudinal ultrasonic waves are excited with a time duration of 100 ns and a maximum pressure amplitude of 10 MPa. The immersion technique is used to measure the parameters of these ultrasonic pulses passed through a specimen; the immersion liquid is distilled water. The reference pulse passed through the cell with water has a compression phase and a rarefaction phase, the amplitude of the rarefaction phase being five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cubic specimens of Karelian gabbro with a rib length of 3 cm are studied. The ultimate strength of the specimens under uniaxial compression is (300±10) MPa. As the reference pulse passes through an area of the specimen without cracks, the compression phase decreases and the rarefaction one increases due to diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, some horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from the structure defects. Computer processing of these signals yields images of the cross-sections of the specimens with cracks. 
As the reference pulse amplitude increases from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through the specimen with horizontal cracks results in a 2.5-fold decrease in the amplitude of the rarefaction phase and a 2.1-fold increase in its duration. As the reference pulse amplitude increases further, from 5 MPa to 10 MPa, time splitting of the phases is observed for the bipolar pulse passed through the specimen: the compression and rarefaction phases propagate with different velocities. These features of powerful broadband ultrasonic pulses passed through rock specimens can be described by the hysteresis model of Preisach-Mayergoyz and can be used for the location of cracks in optically opaque materials.
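The Preisach-Mayergoyz description mentioned above represents the cracked medium as a collection of bistable "hysteron" elements, each switching up at one stress threshold and back down at a lower one, so the bulk response depends on the loading history. A minimal sketch of that idea (the three thresholds are arbitrary illustrative values, not fitted to the gabbro data):

```python
# Discrete Preisach-Mayergoyz sketch: relay hysterons that switch up at
# input alpha and down at beta < alpha; the bulk response is their average.
class Hysteron:
    def __init__(self, alpha, beta):
        self.alpha, self.beta, self.state = alpha, beta, -1.0

    def update(self, u):
        if u >= self.alpha:
            self.state = 1.0
        elif u <= self.beta:
            self.state = -1.0
        return self.state            # unchanged between thresholds: memory

class Preisach:
    def __init__(self, thresholds):
        self.hysterons = [Hysteron(a, b) for a, b in thresholds]

    def respond(self, u):
        return sum(h.update(u) for h in self.hysterons) / len(self.hysterons)

model = Preisach([(0.3, 0.1), (0.5, 0.2), (0.8, 0.4)])   # illustrative thresholds
up = [model.respond(u) for u in (0.0, 0.35, 0.6, 0.9)]    # loading branch
down = [model.respond(u) for u in (0.6, 0.35, 0.0)]       # unloading branch
```

The same input (e.g. 0.6) yields different outputs on the loading and unloading branches, which is the history-dependent asymmetry between compression and rarefaction phases that the abstract attributes to cracked specimens.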

Keywords: cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock

Procedia PDF Downloads 350
565 The Thinking of Dynamic Formulation of Rock Aging Agent Driven by Data

Authors: Longlong Zhang, Xiaohua Zhu, Ping Zhao, Yu Wang

Abstract:

The construction of mines, railways, highways, water conservancy projects, etc., has formed a large number of high steep slope wounds in China. Under the premise of slope stability and safety, repairing these wound spaces at minimum cost, in a green manner and close to their natural state, has become a new problem. Nowadays, in situ element testing and analysis, monitoring, and field quantitative factor classification and assignment evaluation produce vast amounts of data. Data processing and analysis will inevitably differentiate the morphology, mineral composition, and physicochemical properties between rock wounds, by which to dynamically match the appropriate techniques and materials for restoration. In the present research, based on a grid partition of the slope surface, we tested the content of the combined oxides of the rock minerals (SiO₂, CaO, MgO, Al₂O₃, Fe₃O₄, etc.) and classified and assigned values to the hardness and breakage of the rock texture. The data of the essential factors were interpolated and normalized in GIS, which formed a differential zoning map of the slope space. According to the physical and chemical properties and spatial morphology of the rocks in different zones, organic acids (plant waste fruit, fruit residue, etc.), natural mineral powders (zeolite, apatite, kaolin, etc.), water-retaining agent, and plant gum (melon powder) were mixed in different proportions to form rock aging agents. Spraying aging agents with different formulas on the slopes in different sections can effectively age the fresh rock wound, providing convenience for seed implantation and reducing the transformation of heavy metals in the rocks. Through many practical engineering applications, a dynamic data platform of the rock aging agent formula system is formed, which provides materials for the restoration of different slopes. It will also provide a guideline for the mixed use of various natural materials to solve the complex, non-uniform ecological restoration problem.
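The interpolation-and-normalization step is not named in the abstract; a common GIS combination for gridded factor maps is inverse-distance weighting followed by min-max scaling, and the sketch below assumes that pairing purely for illustration:

```python
def idw(x, y, samples, power=2.0):
    """Inverse-distance-weighted estimate at grid point (x, y) from
    (sx, sy, value) sample points, e.g. measured SiO2 content in wt%."""
    num = den = 0.0
    for sx, sy, value in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value              # exactly on a sample point
        w = d2 ** (-power / 2.0)      # weight ~ 1 / distance^power
        num += w * value
        den += w
    return num / den

def minmax(values):
    """Normalize one factor layer to [0, 1] for the zoning map."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Two hypothetical SiO2 measurements on the slope grid, wt%.
samples = [(0.0, 0.0, 60.0), (1.0, 0.0, 40.0)]
mid = idw(0.5, 0.0, samples)   # point equidistant from both samples
```

Normalizing each interpolated factor layer onto a common [0, 1] scale is what lets dissimilar quantities (oxide contents, hardness, breakage scores) be overlaid into one differential zoning map.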

Keywords: data-driven, dynamic state, high steep slope, rock aging agent, wounds

Procedia PDF Downloads 115
564 Wave State of Self: Findings of Synchronistic Patterns in the Collective Unconscious

Authors: R. Dimitri Halley

Abstract:

The research within Jungian psychology presented here concerns the wave state of Self. What has been discovered via shared dreaming, independently correlating dreams across dreamers, lies beyond the Self stage in the deepest layer, the wave state of Self: the very quantum ocean in which the Self archetype is embedded. A quantum wave, or rhyming of meaning constituting synergy across several dreamers, was discovered in dreams and in extensive shared dream work with small groups at a post-therapy stage. Within the format of shared dreaming, we find synergy patterns beyond what Jung called the Self archetype. Jung led us up to the phase of individuation and delivered the baton to Von Franz to work out the next synchronistic stage, proposed here as the finding of the quantum patterns making up the wave state of Self. These enfolded synchronistic patterns have been found in a group format of shared dreaming among individuals approximating individuation, and their unfolding is carried by belief and faith. The reason for this format and operating system is that beyond therapy and lived reality we find no science, no thinking, or even awareness in the therapeutic sense, but rather a state of mental processing resembling that of a spiritual attitude. Thinking as such is linear and cannot contain the deepest layer of Self, the quantum core of the human being. It is self-reflection that is the container for the process at the wave state of Self. Observation locks us into an outside-in reactive flow from a first-person perspective, and hence toward the surface we see to believe, whereas here the direction of focus shifts to inside-out/intrinsic. The operating system, or language, at the wave level of Self is thus belief and synchronicity. Belief has up to now been almost the sole province of organized religions but was viewed by Jung as an inherent property of the process of individuation.
The shared dreaming stage of the synchronistic patterns forms a larger story constituting a deep connectivity unfolding around individual Selves. The dreams of independent dreamers form larger patterns that come together like puzzle pieces into a larger story, and in this sense this group-format work builds on Jung as a post-individuation collective stage. Shared dream correlations will be presented, illustrating a larger story in terms of trails of shared synchronicity.

Keywords: belief, shared dreaming, synchronistic patterns, wave state of self

Procedia PDF Downloads 196
563 NanoFrazor Lithography for Advanced 2D and 3D Nanodevices

Authors: Zhengming Wu

Abstract:

NanoFrazor lithography systems were developed as the first true alternative or extension to standard maskless nanolithography methods such as electron beam lithography (EBL). In contrast to EBL, they are based on thermal scanning probe lithography (t-SPL). Here, a heatable ultra-sharp probe tip with an apex of a few nm is used for patterning and simultaneously inspecting complex nanostructures. The heat impact of the probe on a thermally responsive resist generates these high-resolution nanostructures. The patterning depth of each individual pixel can be controlled with better than 1 nm precision using an integrated in-situ metrology method. Furthermore, the inherent imaging capability of the NanoFrazor technology allows for markerless overlay, which has been achieved with sub-5 nm accuracy, and it supports stitching layout sections together with < 10 nm error. Pattern transfer from such resist features at below 10 nm resolution has been demonstrated. The technology has proven its value as an enabler of new kinds of ultra-high-resolution nanodevices as well as for improving the performance of existing device concepts. The application range of this new nanolithography technique is very broad, spanning from ultra-high-resolution 2D and 3D patterning to chemical and physical modification of matter at the nanoscale. Nanometer-precise markerless overlay and non-invasiveness toward sensitive materials are among the key strengths of the technology. However, while patterning at below 10 nm resolution is achieved, significantly increasing the patterning speed at the expense of resolution is not feasible with the heated tip alone. To this end, an integrated laser write head for direct laser sublimation (DLS) of the thermal resist has been introduced for significantly faster patterning of micrometer- to millimeter-scale features.
Remarkably, the areas patterned by the tip and the laser are seamlessly stitched together, and both processes work on the very same resist material, enabling a true mix-and-match process with no development or other processing steps in between. The presentation will include examples of (i) high-quality metal contacting of 2D materials, (ii) tuning photonic molecules, (iii) generating nanofluidic devices, and (iv) generating spintronic circuits. Some of these applications have been enabled only by the various unique capabilities of NanoFrazor lithography, such as the absence of damage from a charged particle beam.

Keywords: nanofabrication, grayscale lithography, 2D materials device, nano-optics, photonics, spintronic circuits

Procedia PDF Downloads 72
562 The Golden Bridge for a Better Farmers' Life

Authors: Giga Rahmah An-Nafisah, Lailatus Syifa Kamilah

Abstract:

Agriculture today, especially in Indonesia, has improved in global terms since the election of the new president, whose work program prioritizes food self-sufficiency. Many measures have been planned carefully, all aimed at maximizing agricultural production for the future. Viewed from another side, however, something is missing: improvement in the livelihood of the farmers themselves. It is useless to fix the entire agricultural processing system to maximize output if the heroes of agriculture do not themselves move toward a better life. The obstacle is the broker, or middleman, system for agricultural produce. This broker system is the real problem facing farmers' welfare: however large the harvest, if farmers must sell it to middlemen at very low prices, their welfare cannot progress. The broker system as the middlemen actually practice it should not exist in the current agricultural system; with agriculture in its present worrying condition, they still reap as much profit as possible, no matter how miserably the farmers manage their farms while also facing import competition that can no longer be avoided. This phenomenon is in plain sight of everyone who looks, yet the farmers who fall victim to it can do nothing to change the system, because often only the middlemen are willing to buy their produce; the broker system is, so to speak, the only bridge in the economic life of the farmers. The problem, then, is how to strive for the welfare of our food heroes. The golden bridge that could save them is the government, because the government, with its powers, can stop this broker system more easily than any other party. The government should be the bridge connecting the farmers with the consumers, that is, with the people themselves.
The broker system should be reformed so that agricultural produce is bought from farmers at the highest possible price and sold to consumers at the lowest possible price. What, then, of the fate of the middlemen? The broker system indirectly resembles corruption, in the sense of an activity that harms its victims without their noticing, continually enriching the perpetrator while leaving the victims' lives miserable. The government could transfer the role of the middlemen into this new bridge, employing them to remain distributors of agricultural products, but under a new policy made by the government to keep improving the welfare of farmers. This idea on its own will not greatly improve the welfare of farmers, but at the very least it can rally many people to take the farmers' side before the government, since through daily conversation, much as celebrity gossip spreads, it can quickly become known to many people.

Keywords: broker system, farmers live, government, agricultural economics

Procedia PDF Downloads 294
561 Automatic Detection of Sugarcane Diseases: A Computer Vision-Based Approach

Authors: Himanshu Sharma, Karthik Kumar, Harish Kumar

Abstract:

A major problem in crop cultivation is the occurrence of multiple crop diseases. During the growth stage, timely identification of crop diseases is paramount to ensuring high crop yield, lowering production costs, and minimizing pesticide usage. In most cases, crop diseases produce observable characteristics and symptoms. Surveyors usually diagnose crop diseases as they walk through the fields. However, surveyor inspections tend to be biased and error-prone due to the monotonous nature of the task and the subjectivity of individuals. In addition, visual inspection of each leaf or plant is costly, time-consuming, and labour-intensive. Furthermore, the plant pathologists and experts who can often identify a disease from its symptoms at an early stage are not readily available in remote regions. Therefore, this study specifically addressed early detection of the leaf scald, red rot, and eyespot diseases in sugarcane plants. The study proposes a computer vision-based approach using a convolutional neural network (CNN) for automatic identification of crop diseases. To facilitate this, images of sugarcane diseases were first taken from Google without modifying the scene or background or controlling the illumination, to build the training dataset. The testing dataset was then developed from images collected in real time from sugarcane fields in India. Next, the image dataset was pre-processed for feature extraction and selection. Finally, a CNN based on the Visual Geometry Group (VGG) architecture was deployed on the training and testing datasets to classify the images into diseased and healthy sugarcane plants, and the model's performance was measured using several metrics, i.e., accuracy, sensitivity, specificity, and F1-score. The promising results of the proposed model lay the groundwork for automatic early detection of sugarcane disease. The proposed research directly supports an increase in crop yield.
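The evaluation metrics named above (accuracy, sensitivity, specificity, and F1-score) all follow from the confusion-matrix counts of the binary diseased/healthy task. The sketch below shows that computation; the labels are invented for illustration, since the abstract does not report its raw predictions.

```python
# Sketch: compute accuracy, sensitivity, specificity, and F1-score from
# predicted vs. true labels for a binary diseased/healthy classifier.

def binary_metrics(y_true, y_pred, positive="diseased"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # recall on diseased plants
    specificity = tn / (tn + fp) if tn + fp else 0.0  # recall on healthy plants
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1}

# Invented example: 6 diseased and 4 healthy test images.
y_true = ["diseased"] * 6 + ["healthy"] * 4
y_pred = ["diseased"] * 5 + ["healthy"] * 4 + ["diseased"]
metrics = binary_metrics(y_true, y_pred)
```

Reporting specificity alongside sensitivity matters here because missing a diseased plant (a false negative) and spraying a healthy one (a false positive) carry different costs in the field.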

Keywords: automatic classification, computer vision, convolutional neural network, image processing, sugarcane disease, visual geometry group

Procedia PDF Downloads 116