Search results for: input impedance

436 Design and Modeling of Human Middle Ear for Harmonic Response Analysis

Authors: Shende Suraj Balu, A. B. Deoghare, K. M. Pandey

Abstract:

The human middle ear (ME) is a delicate and vital organ. It has a complex structure that performs various functions such as receiving sound pressure, producing vibrations of the eardrum and propagating them to the inner ear. It consists of the Tympanic Membrane (TM), three auditory ossicles, and various ligament structures and muscles. Incidents such as traumata, infections, ossification of ossicular structures and other pathologies may damage the ME organs. These conditions can be surgically treated by employing a prosthesis. However, the suitability of the prosthesis needs to be examined prior to the surgery. A few decades ago, this issue was addressed and analyzed by developing an equivalent representation either as a spring-mass system, an electrical R-L-C circuit, or an approximate CAD model. Nowadays, however, a three-dimensional ME model can be constructed from micro X-ray Computed Tomography (μCT) scan data. Moreover, patient-specific concerns pertaining to the disease can be examined well in advance. The current research work develops the ME model from stacks of μCT images, which are used as input to the MIMICS Research 19.0 (Materialise Interactive Medical Image Control System) software. The stack of CT images is converted into a geometrical surface model to build an accurate morphology of the ME. The work is further extended to study the harmonic response of the stapes footplate and umbo for different sound pressure levels applied at the lateral side of the eardrum, using a finite element approach. The pathological condition cholesteatoma of the ME is investigated to obtain the peak-to-peak displacement of the stapes footplate and umbo. Apart from this condition, other pathologies, mainly changes in the stiffness of the stapedial ligament, TM thickness, and ossicular chain separation and fixation, are also explored. The developed model of the ME with pathologies is validated by comparing with results available in the literature and with the results of a normal ME to calculate the percentage loss in hearing capability.
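As a minimal illustration of the kind of post-processing described above (a sketch, not the authors' code), the snippet below compares the peak-to-peak stapes-footplate displacement of a pathological model with a normal-ear baseline to express the loss in hearing capability as a percentage; the frequency grid and displacement curves are hypothetical placeholders for values exported from the finite element solver.

```python
import numpy as np

# Hypothetical harmonic-response results (peak-to-peak displacement in metres)
# exported from an FE solver for a normal ear and a cholesteatoma model.
freqs_hz = np.logspace(np.log10(100), np.log10(10_000), 50)      # excitation frequencies
u_pp_normal = 1e-9 * np.exp(-((np.log10(freqs_hz) - 3.0) ** 2))  # placeholder curve
u_pp_pathological = 0.4 * u_pp_normal                            # placeholder curve

def loss_percent(u_normal, u_pathological):
    """Percentage reduction of footplate displacement relative to the normal ear."""
    return 100.0 * (1.0 - u_pathological / u_normal)

loss = loss_percent(u_pp_normal, u_pp_pathological)
print(f"mean loss over 0.1-10 kHz: {loss.mean():.1f} %")
```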

Keywords: computed tomography (μCT), human middle ear (ME), harmonic response, pathologies, tympanic membrane (TM)

Procedia PDF Downloads 175
435 Increased Energy Efficiency and Improved Product Quality in Processing of Lithium Bearing Ores by Applying Fluidized-Bed Calcination Systems

Authors: Edgar Gasafi, Robert Pardemann, Linus Perander

Abstract:

For the production of lithium carbonate or hydroxide from lithium-bearing ores, a thermal activation (calcination/decrepitation) is required for the phase transition in the mineral to enable acid or soda leaching, respectively, in the downstream hydrometallurgical section. In this paper, traditional processing in the lithium industry is reviewed, and opportunities to reduce energy consumption and improve product quality and recovery rate are discussed. The conventional process approach is still based on rotary kiln calcination, a technology in use since the early days of lithium ore processing, albeit not significantly further developed since. A new technology, at least for the lithium industry, is fluidized bed calcination. Decrepitation of lithium ore was investigated at Outotec’s Frankfurt Research Centre. Focusing on fluidized bed technology, a study of the major process parameters (temperature and residence time) was performed at laboratory and larger bench scale, aiming for optimal product quality for subsequent processing. The technical feasibility was confirmed for optimal process conditions at pilot scale (400 kg/h feed input), providing the basis for industrial process design. Based on the experimental results, a comprehensive Aspen Plus flowsheet simulation was developed to quantify mass and energy flows for the rotary kiln and fluidized bed systems. Results show a significant reduction in energy consumption and improved process performance in terms of temperature profile, product quality and plant footprint. The major conclusion is that a substantial reduction of energy consumption can be achieved in processing lithium-bearing ores by using fluidized-bed-based systems. At the same time, and in contrast to the rotary kiln process, accurate temperature and residence time control is ensured in fluidized-bed systems, leading to a homogeneous temperature profile in the reactor, which prevents overheating and sintering of the solids and results in uniform product quality.
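The kind of mass-and-energy comparison carried out in the Aspen Plus study can be illustrated with a back-of-the-envelope balance; the sketch below is not the authors' flowsheet, and the heat capacity, transition enthalpy, efficiency and heat-recovery figures are illustrative assumptions only.

```python
# Rough specific-energy comparison (kWh per tonne of spodumene concentrate)
# for rotary-kiln versus fluidized-bed calcination. All numbers are assumptions.
CP_ORE_KJ_PER_KG_K = 1.1          # assumed mean heat capacity of the ore
DH_DECREPITATION_KJ_PER_KG = 650  # assumed alpha->beta phase-transition enthalpy
T_IN_C, T_CALC_C = 25.0, 1050.0   # feed and calcination temperatures

def specific_energy_kwh_per_t(thermal_efficiency, heat_recovery=0.0):
    """Heat demand per tonne of ore for a given overall thermal efficiency
    and fraction of sensible heat recovered from the hot product."""
    sensible = CP_ORE_KJ_PER_KG_K * (T_CALC_C - T_IN_C) * (1.0 - heat_recovery)
    total_kj_per_kg = (sensible + DH_DECREPITATION_KJ_PER_KG) / thermal_efficiency
    return total_kj_per_kg * 1000.0 / 3600.0  # kJ/kg -> kWh/t

print("rotary kiln   :", round(specific_energy_kwh_per_t(0.55), 1), "kWh/t")
print("fluidized bed :", round(specific_energy_kwh_per_t(0.75, heat_recovery=0.3), 1), "kWh/t")
```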

Keywords: calcination, decrepitation, fluidized bed, lithium, spodumene

Procedia PDF Downloads 230
434 Exoskeleton Response During Infant Physiological Knee Kinematics And Dynamics

Authors: Breanna Macumber, Victor A. Huayamave, Emir A. Vela, Wangdo Kim, Tamara T. Chamber, Esteban Centeno

Abstract:

Spina bifida is a type of neural tube defect that affects the nervous system and can lead to problems such as total leg paralysis. Treatment requires physical therapy and rehabilitation. Robotic exoskeletons have been used for rehabilitation to train muscle movement and assist in injury recovery; however, current models focus on adult populations and not on infants. The proposed framework aims to couple a musculoskeletal infant model with a robotic exoskeleton using vacuum-powered artificial muscles to provide rehabilitation to infants affected by spina bifida. The study that drove the input values for the robotic exoskeleton used motion capture technology to collect data from the spontaneous kicking movement of a 2.4-month-old infant lying supine. OpenSim was used to develop the musculoskeletal model, and inverse kinematics was used to estimate hip joint angles. A total of 4 kicks (A, B, C, D) were selected; the selection was based on range, transient response, and stable response. Kicks had at least 5° of range of motion with a smooth transient response and a stable period. The robotic exoskeleton used a Vacuum-Powered Artificial Muscle (VPAM) whose structure comprised cells that were clipped in a collapsed state and unclipped when desired to simulate the infant’s age. The artificial muscle works with vacuum pressure: when air is removed, the muscle contracts, and when air is added, the muscle relaxes. Bench testing was performed using a 6-month-old infant mannequin. The previously developed exoskeleton worked well with controlled ranges of motion and frequencies, which are typical of rehabilitation protocols for infants suffering from spina bifida. However, the random kicking motion in this study contained high-frequency kicks, and the exoskeleton was not able to accurately replicate all the investigated kicks. Kick 'A' had a greater error when compared to the other kicks. This study has the potential to advance the infant rehabilitation field.
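The kick-selection criterion (at least 5° of range of motion with a smooth transient and a stable period) can be expressed compactly; the sketch below assumes hip-flexion angle traces of the kind exported from OpenSim inverse kinematics and is illustrative rather than the study's actual script.

```python
import numpy as np

def is_valid_kick(angle_deg, smooth_threshold_deg=2.0):
    """Accept a kick if its range of motion is >= 5 degrees and the
    frame-to-frame change stays below a smoothness threshold (assumed)."""
    rom = angle_deg.max() - angle_deg.min()
    max_step = np.max(np.abs(np.diff(angle_deg)))
    return rom >= 5.0 and max_step <= smooth_threshold_deg

# Hypothetical hip-flexion traces (degrees) for two candidate kicks
t = np.linspace(0.0, 1.0, 100)
kicks = {"A": 10 * np.sin(2 * np.pi * t), "B": 2 * np.sin(2 * np.pi * t)}
selected = [name for name, trace in kicks.items() if is_valid_kick(trace)]
print("selected kicks:", selected)   # kick B is rejected (< 5 deg range of motion)
```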

Keywords: musculoskeletal modeling, soft robotics, rehabilitation, pediatrics

Procedia PDF Downloads 83
433 Computational Pipeline for Lynch Syndrome Detection: Integrating Alignment, Variant Calling, and Annotations

Authors: Rofida Gamal, Mostafa Mohammed, Mariam Adel, Marwa Gamal, Marwa Kamal, Ayat Saber, Maha Mamdouh, Amira Emad, Mai Ramadan

Abstract:

Lynch Syndrome is an inherited genetic condition associated with an increased risk of colorectal and other cancers. Detecting Lynch Syndrome in individuals is crucial for early intervention and preventive measures. This study proposes a computational pipeline for Lynch Syndrome detection that integrates alignment, variant calling, and annotation. The pipeline leverages popular tools such as FastQC, Trimmomatic, BWA, bcftools, and ANNOVAR to process the input FASTQ file, perform quality trimming, align reads to the reference genome, call variants, and annotate them. The pipeline was applied to a dataset of Lynch Syndrome cases, and its performance was evaluated. The quality check step is believed to have ensured the integrity of the sequencing data, while the trimming process is thought to have removed low-quality bases and adaptors. In the alignment step, the reads are believed to have been accurately mapped to the reference genome, and the subsequent variant calling step is believed to have identified potential genetic variants. The annotation step is believed to have provided functional insights into the detected variants, including their effects on known Lynch Syndrome-associated genes. The results obtained from the pipeline revealed Lynch Syndrome-related positions in the genome, providing valuable information for further investigation and clinical decision-making. The pipeline's effectiveness was demonstrated through its ability to streamline the analysis workflow and identify potential genetic markers associated with Lynch Syndrome. The computational pipeline is believed to present a comprehensive and efficient approach to Lynch Syndrome detection, contributing to early diagnosis and intervention. The modularity and flexibility of the pipeline are believed to enable customization and adaptation to various datasets and research settings. Further optimization and validation are believed to be necessary to enhance performance and applicability across diverse populations.
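A skeleton of the described workflow is sketched below as a Python wrapper around the command-line tools; file names, reference paths and tool options are placeholders, the samtools sorting/indexing step is added here for convenience although it is not named in the abstract, and the exact flags used by the authors are not known, so the commands should be read as illustrative.

```python
import subprocess

SAMPLE = "sample"           # placeholder FASTQ prefix
REF = "hg38.fa"             # placeholder reference genome (indexed for BWA)
ANNOVAR_DB = "humandb/"     # placeholder ANNOVAR database directory

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# 1. Quality check and adaptor/quality trimming
run(f"fastqc {SAMPLE}.fastq")
run(f"trimmomatic SE {SAMPLE}.fastq {SAMPLE}.trimmed.fastq SLIDINGWINDOW:4:20 MINLEN:36")

# 2. Alignment to the reference genome (sorting/indexing via samtools for convenience)
run(f"bwa mem {REF} {SAMPLE}.trimmed.fastq | samtools sort -o {SAMPLE}.bam -")
run(f"samtools index {SAMPLE}.bam")

# 3. Variant calling
run(f"bcftools mpileup -f {REF} {SAMPLE}.bam | bcftools call -mv -Oz -o {SAMPLE}.vcf.gz")

# 4. Annotation of variants (e.g. against MMR genes such as MLH1, MSH2, MSH6, PMS2)
run(f"table_annovar.pl {SAMPLE}.vcf.gz {ANNOVAR_DB} -buildver hg38 "
    f"-out {SAMPLE} -protocol refGene,clinvar_20221231 -operation g,f -vcfinput")
```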

Keywords: Lynch Syndrome, computational pipeline, alignment, variant calling, annotation, genetic markers

Procedia PDF Downloads 76
432 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving

Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian

Abstract:

In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has certain drawbacks in that the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that we can model different and complex road scenarios in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy. This learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides us with a strong foundation to test our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results. When using the PPO algorithm, the reward is greater, and the acceleration, steering angle and braking are more stable compared to the other algorithms, which means that the agent learns to drive in a better and more efficient way in this case. Additionally, we have produced a dataset taken from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form (all input values, acceleration, steering angle, brake, loss, reward). This study can serve as a base for further complex road scenarios. Furthermore, it can be extended to the field of computer vision, using images to find the best policy.
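A minimal training loop of the kind compared in the study is sketched below using the stable-baselines3 implementation of PPO; since the TORCS gym wrapper used by the authors is not a standard public environment, a generic continuous-control environment stands in for it here, and the timestep budget is illustrative.

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Stand-in environment: the study used TORCS sensor observations
# (velocity, distance from track edges, ...) via a custom gym wrapper.
env = gym.make("Pendulum-v1")

model = PPO("MlpPolicy", env, verbose=0)   # on-policy, clipped surrogate objective
model.learn(total_timesteps=50_000)        # training budget (illustrative)

# Evaluate the learned policy for one episode and log per-step data,
# mirroring the (inputs, action, reward) dataset described in the abstract.
obs, _ = env.reset()
episode = []
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    episode.append((obs.tolist(), action.tolist(), float(reward)))
    done = terminated or truncated
print("steps logged:", len(episode))
```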

Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning

Procedia PDF Downloads 147
431 A Voice Retrieved from the Holocaust in New Journalism in Kazuo Ishiguro's The Remains of the Day

Authors: Masami Usui

Abstract:

Kazuo Ishiguro’s The Remains of the Day (1989) underlines another holocaust, an imprisonment of human life, dignity, and self in the globalizing sphere of the twentieth century. The Remains of the Day delineates the invisible and cruel space of “lost and found” in the postcolonial and post-imperial discourse of this century, that is, the Holocaust. The context of the concentration camp or wartime imprisonment such as Auschwitz is transplanted into the public sphere of modern England, Darlington Hall. The voice is retrieved and expressed by the young journalist and heir of Darlington Hall, Mr. David Cardinal. The new medium of journalism is an intruder at Darlington Hall and plays a role in revealing the wrongly input ideology. “Lost and found” consists of the private and public retrieved voices. Stevens’ journey in 1956 is a return to the past, especially the period between 1935 and 1936. Lost time is retrieved on his journey; yet lost life cannot be revived entirely in the remains of his life. The supreme days of Darlington Hall are the terrifying days caused by the Nazis. Fascism, terrorism, and militarism destroyed the wholesomeness of the globe. To the blind Stevens, both Miss Kenton and Mr. Cardinal bring out the common issue, that is, the political conflicts caused by the Nazis. Miss Kenton expresses her own ideas against anti-Semitism regarding the Jewish maids at the crucial time when Sir Oswald Mosley’s Blackshirt organization attacked the Anglo-Jews between 1935 and 1936. Miss Kenton’s half-muted statement is reinforced and assured by Cardinal in his mention of the 1934 Olympia rally threatened by Mosley’s Blackshirts. Cardinal’s invasion of Darlington Hall embodies the increasing tension of international politics related to World War II. Darlington Hall accommodates the crucial political issue that definitely influences the fate of the house, its residents, and the nation itself, and that is retrieved in the newly progressive and established media.

Keywords: modern English literature, culture studies, communication, history

Procedia PDF Downloads 574
430 Design of a Backlight Hyperspectral Imaging System for Enhancing Image Quality in Artificial Vision Food Packaging Online Inspections

Authors: Ferran Paulí Pla, Pere Palacín Farré, Albert Fornells Herrera, Pol Toldrà Fernández

Abstract:

Poor image acquisition is limiting the promising growth of industrial vision in food control. In recent years, the food industry has witnessed a significant increase in the implementation of automation in quality control through artificial vision, a trend that continues to grow. During the packaging process, some defects may appear, compromising the proper sealing of the products and diminishing their shelf life, sanitary conditions and overall properties. While failure to detect a defective product leads to major losses, food producers also aim to minimize over-rejection to avoid unnecessary waste. Thus, accuracy in the evaluation of the products is crucial, and, given the large production volumes, even small improvements have a significant impact. Recently, efforts have been focused on maximizing the performance of classification neural networks; nevertheless, their performance is limited by the quality of the input data. Monochrome linear backlight systems are most commonly used for online inspections of food packaging thermo-sealing zones. These simple acquisition systems fit the high cadence of the production lines imposed by market demand. Nevertheless, they provide a limited amount of data, which negatively impacts classification algorithm training. The desired situation is one where data quality is maximized, in terms of obtaining the key information needed to detect defects, while maintaining a fast working pace. This work presents a backlight hyperspectral imaging system designed and implemented to replicate an industrial environment, in order to better understand the relationship between visual data quality and spectral illumination range for a variety of packed food products. Furthermore, the results led to the identification of advantageous spectral bands that significantly enhance image quality, providing clearer detection of defects.
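One simple way to identify advantageous spectral bands of the kind reported is to rank the bands by the contrast they provide between well-sealed and defective regions of the packaging; the sketch below operates on a hypothetical hyperspectral cube and region masks and is not the authors' processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical backlit hyperspectral cube: height x width x spectral bands
cube = rng.random((120, 160, 32))
seal_mask = np.zeros((120, 160), dtype=bool); seal_mask[50:70, :] = True
defect_mask = np.zeros((120, 160), dtype=bool); defect_mask[58:62, 80:90] = True

def band_contrast(cube, mask_a, mask_b):
    """Per-band separability between two regions: |mean difference| / pooled std."""
    a = cube[mask_a].reshape(-1, cube.shape[2])
    b = cube[mask_b].reshape(-1, cube.shape[2])
    pooled_std = np.sqrt(0.5 * (a.var(axis=0) + b.var(axis=0))) + 1e-12
    return np.abs(a.mean(axis=0) - b.mean(axis=0)) / pooled_std

scores = band_contrast(cube, seal_mask & ~defect_mask, defect_mask)
best = np.argsort(scores)[::-1][:5]
print("top candidate bands:", best, "scores:", np.round(scores[best], 3))
```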

Keywords: artificial vision, food packaging, hyperspectral imaging, image acquisition, quality control

Procedia PDF Downloads 22
429 Quantum Information Scrambling and Quantum Chaos in Silicon-Based Fermi-Hubbard Quantum Dot Arrays

Authors: Nikolaos Petropoulos, Elena Blokhina, Andrii Sokolov, Andrii Semenov, Panagiotis Giounanlis, Xutong Wu, Dmytro Mishagli, Eugene Koskin, Robert Bogdan Staszewski, Dirk Leipold

Abstract:

We investigate entanglement and quantum information scrambling (QIS) using the example of a many-body extended and spinless effective Fermi-Hubbard model (EFHM and e-FHM, respectively) that describes a special type of quantum dot array provided by Equal1 Labs' silicon-based quantum computer. The concept of QIS is used in the framework of quantum information processing by quantum circuits and quantum channels. In general, QIS manifests as the delocalization of quantum information over the entire quantum system; more compactly, information about the input cannot be obtained by local measurements of the output of the quantum system. In our work, we first introduce the concept of quantum information scrambling and its connection with 4-point out-of-time-order (OTO) correlators. In order to have a quantitative measure of QIS, we use the tripartite mutual information, along similar lines to previous works, which measures the mutual information between four different spacetime partitions of the system, and we study the Transverse Field Ising (TFI) model; this is used to quantify the dynamical spreading of quantum entanglement and information in the system. Then, we investigate scrambling in the quantum many-body extended Hubbard model with external magnetic field Bz and spin-spin coupling J for both uniform and thermal quantum channel inputs and show that it scrambles for specific external tuning parameters (e.g., tunneling amplitudes, on-site potentials, magnetic field). In addition, we compare different Hilbert space sizes (different numbers of qubits) and show the qualitative and quantitative differences in quantum scrambling as we increase the number of quantum degrees of freedom in the system. Moreover, we find a "scrambling phase transition" at a threshold temperature in the thermal case, that is, the temperature of the model at which the channel starts to scramble quantum information. Finally, we make comparisons to the TFI model, highlight the key physical differences between the two systems, and mention some future directions of research.
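As a concrete illustration of the scrambling diagnostic, the sketch below computes the tripartite mutual information I3(A:C:D) = I(A:C) + I(A:D) − I(A:CD) from the entanglement entropies of a small random pure state; the four-qubit-group partitioning follows the standard construction, but the state itself is a placeholder rather than the EFHM dynamics.

```python
import numpy as np

N = 6                            # total qubits (placeholder system size)
A, C, D = [0], [2, 3], [4, 5]    # spacetime-style partitions; B is the rest

rng = np.random.default_rng(1)
psi = rng.normal(size=2**N) + 1j * rng.normal(size=2**N)
psi /= np.linalg.norm(psi)       # random pure state standing in for the channel state

def entropy(keep):
    """Von Neumann entropy (bits) of the reduced state on qubits `keep`."""
    rest = [q for q in range(N) if q not in keep]
    t = psi.reshape([2] * N).transpose(keep + rest).reshape(2 ** len(keep), -1)
    rho = t @ t.conj().T
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def mutual_info(x, y):
    return entropy(x) + entropy(y) - entropy(x + y)

i3 = mutual_info(A, C) + mutual_info(A, D) - mutual_info(A, C + D)
print("I3(A:C:D) =", round(i3, 4), "(negative values signal scrambling)")
```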

Keywords: condensed matter physics, quantum computing, quantum information theory, quantum physics

Procedia PDF Downloads 99
428 Filtering Momentum Life Cycles, Price Acceleration Signals and Trend Reversals for Stocks, Credit Derivatives and Bonds

Authors: Periklis Brakatsoulas

Abstract:

Recent empirical research shows a growing interest in investment decision-making under market anomalies that contradict the rational paradigm. Momentum is undoubtedly one of the most robust anomalies in empirical asset pricing research and has remained surprisingly lucrative ever since it was first documented. Although predominantly identified across equities, momentum premia are now evident across various asset classes. Yet few attempts have been made so far to provide traders with a diversified portfolio of strategies across different assets and markets. Moreover, the literature focuses on patterns from past returns rather than mechanisms to signal future price directions prior to momentum runs. The aim of this paper is to develop a diversified portfolio approach to price distortion signals using daily position data on stocks, credit derivatives, and bonds. An algorithm allocates assets periodically, and new investment tactics take over upon price momentum signals and across different ranking groups. We focus on momentum life cycles, trend reversals, and price acceleration signals. The main effort here concentrates on the density, time span and maturity of momentum phenomena to identify consistent patterns over time and measure the predictive power of buy-sell signals generated by these anomalies. To tackle this, we propose a two-stage modelling process. First, we generate forecasts of core macroeconomic drivers. Secondly, satellite models generate market risk forecasts using the core driver projections generated at the first stage as input. Moreover, using a combination of the ARFIMA and FIGARCH models, we examine the dependence of consecutive observations across time and portfolio assets, since long-memory behavior in the volatilities of one market appears to trigger persistent volatility patterns across other markets. We believe that this is the first work that employs evidence of volatility transmissions among derivatives, equities, and bonds to identify momentum life cycle patterns.
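A stripped-down version of the ranking logic behind such momentum strategies is sketched below with pandas; the 12-1 formation window, monthly rebalancing and decile split are common conventions used here for illustration rather than the exact specification of the paper, and the return panel is synthetic.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.period_range("2015-01", periods=96, freq="M").to_timestamp()
assets = [f"asset_{i}" for i in range(20)]
# Hypothetical monthly total-return panel (stocks, CDS and bonds would each get one)
returns = pd.DataFrame(rng.normal(0.005, 0.04, (len(dates), len(assets))),
                       index=dates, columns=assets)

# 12-1 momentum: cumulative return over months t-12 .. t-2 (skip the last month)
formation = (1 + returns).rolling(11).apply(np.prod, raw=True).shift(2) - 1

# Cross-sectional ranking into winner/loser groups at each rebalancing date
ranks = formation.rank(axis=1, pct=True)
signal = pd.DataFrame(0, index=dates, columns=assets)
signal[ranks >= 0.9] = 1    # long the top decile ("momentum run" candidates)
signal[ranks <= 0.1] = -1   # short the bottom decile (trend-reversal candidates)

strategy = (signal.shift(1) * returns).mean(axis=1)  # equal-weighted, next-month P&L
print("annualised mean return: %.2f%%" % (100 * 12 * strategy.mean()))
```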

Keywords: forecasting, long memory, momentum, returns

Procedia PDF Downloads 102
427 Comparison of Propofol versus Ketamine-Propofol Combination as an Anesthetic Agent in Supratentorial Tumors: A Randomized Controlled Study

Authors: Jakkireddy Sravani

Abstract:

Introduction: The maintenance of hemodynamic stability is of pivotal importance in supratentorial surgeries. Anesthesia for supratentorial tumors requires an understanding of localized or generalized rising ICP, regulation and maintenance of intracerebral perfusion, and avoidance of secondary systemic ischemic insults. We aimed to compare the effects of the combination of ketamine and propofol with propofol alone when used as induction and maintenance anesthetic agents during supratentorial tumor surgery. Methodology: This prospective, randomized, double-blinded controlled study was conducted at AIIMS Raipur after obtaining Institute Ethics Committee approval (1212/IEC-AIIMSRPR/2022 dated 15/10/2022), CTRI/2023/01/049298 registration, and written informed consent. Fifty-two supratentorial tumor patients posted for craniotomy and excision were included in the study. The patients were randomized into two groups: one group received a combination of ketamine and propofol, and the other group received propofol for induction and maintenance of anesthesia. Intraoperative hemodynamic stability and quality of brain relaxation were studied in both groups. Statistical analysis and technique: An MS Excel spreadsheet program was used to code and record the data. Data analysis was done using IBM Corp SPSS v23. The independent-sample t-test was applied for continuously distributed data when two groups were compared, the chi-square test for categorical data, and the Wilcoxon test for non-normally distributed data. Results: The patients were comparable in terms of demographic profile, duration of surgery, and intraoperative input-output status. The trends in BIS over time were similar between the two groups (p-value = 1.00). Intraoperative hemodynamics (SBP, DBP, MAP) were better maintained in the ketamine and propofol combination group during induction and maintenance (p-value < 0.01). The quality of brain relaxation was comparable between the two groups (p-value = 0.364). Conclusion: The ketamine and propofol combination for the induction and maintenance of anesthesia was associated with superior hemodynamic stability, required fewer vasopressors during excision of supratentorial tumors, provided adequate brain relaxation, and offered some degree of neuroprotection compared to propofol alone.
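The statistical tests named above map directly onto SciPy routines; the sketch below, run on synthetic data, shows the pattern (independent-samples t-test for continuous data, chi-square for categorical outcomes, and a Wilcoxon rank-sum test for non-normal data) without reproducing the trial's actual numbers.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic mean arterial pressure (MAP, mmHg) after induction in the two arms
map_ketofol = rng.normal(85, 8, 26)    # ketamine + propofol group (n = 26)
map_propofol = rng.normal(78, 9, 26)   # propofol-only group (n = 26)

t_stat, p_t = stats.ttest_ind(map_ketofol, map_propofol)          # continuous data
print(f"t-test: t = {t_stat:.2f}, p = {p_t:.4f}")

# Categorical outcome, e.g. vasopressor use (yes/no) per group (synthetic counts)
table = np.array([[5, 21], [14, 12]])
chi2, p_chi, _, _ = stats.chi2_contingency(table)
print(f"chi-square: chi2 = {chi2:.2f}, p = {p_chi:.4f}")

# Non-normally distributed score, e.g. brain-relaxation grade (1-4)
relax_ketofol = rng.integers(1, 5, 26)
relax_propofol = rng.integers(1, 5, 26)
w_stat, p_w = stats.ranksums(relax_ketofol, relax_propofol)        # Wilcoxon rank-sum
print(f"Wilcoxon rank-sum: W = {w_stat:.2f}, p = {p_w:.4f}")
```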

Keywords: supratentorial tumors, hemodynamic stability, brain relaxation, ketamine, propofol

Procedia PDF Downloads 25
426 Suitability of Satellite-Based Data for Groundwater Modelling in Southwest Nigeria

Authors: O. O. Aiyelokun, O. A. Agbede

Abstract:

Numerical modelling of groundwater flow can be susceptible to calibration errors due to the lack of adequate ground-based hydro-meteorological stations in river basins. Groundwater resources management in Southwest Nigeria is currently challenged by overexploitation, lack of planning and monitoring, urbanization and climate change; hence, for models to be adopted as decision support tools for sustainable management of groundwater, they must be adequately calibrated. Since river basins in Southwest Nigeria are characterized by missing data and a lack of adequate ground-based hydro-meteorological stations, the need for adopting satellite-based data for constructing distributed models is crucial. This study seeks to evaluate the suitability of satellite-based data as a substitute for ground-based data for computing boundary conditions, by determining whether ground- and satellite-based meteorological data fit well in the Ogun and Oshun River basins. The Climate Forecast System Reanalysis (CFSR) global meteorological dataset was first obtained in daily form and converted to monthly form for a period of 432 months (January 1979 to June 2014). Afterwards, ground-based meteorological data for Ikeja (1981-2010), Abeokuta (1983-2010), and Oshogbo (1981-2010) were compared with the CFSR data using goodness-of-fit (GOF) statistics. The study revealed that, based on mean absolute error (MAE), coefficient of correlation (r) and coefficient of determination (R²), all meteorological variables except wind speed fit well. It was further revealed that maximum and minimum temperature, relative humidity and rainfall had a high range of index of agreement (d) and ratio of standard deviations (rSD), implying that the CFSR dataset could be used to compute boundary conditions such as groundwater recharge and potential evapotranspiration. The study concluded that satellite-based data such as the CFSR should be used as input when constructing groundwater flow models in river basins in Southwest Nigeria, where the majority of the river basins are partially gauged and characterized by long gaps in hydro-meteorological records.
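The goodness-of-fit statistics used in the comparison can be computed directly; the sketch below, assuming paired monthly series of observed (ground-station) and CFSR values, implements MAE, r, R², Willmott's index of agreement (d) and the ratio of standard deviations (rSD); the rainfall figures are placeholders.

```python
import numpy as np

def goodness_of_fit(observed, predicted):
    """GOF statistics for a ground-based series vs. a CFSR series."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    mae = np.mean(np.abs(p - o))
    r = np.corrcoef(o, p)[0, 1]
    r2 = r ** 2                                       # coefficient of determination
    d = 1.0 - np.sum((o - p) ** 2) / np.sum(
        (np.abs(p - o.mean()) + np.abs(o - o.mean())) ** 2)   # Willmott's index
    rsd = p.std(ddof=1) / o.std(ddof=1)
    return {"MAE": mae, "r": r, "R2": r2, "d": d, "rSD": rsd}

# Hypothetical monthly rainfall (mm) at a station vs. the co-located CFSR grid cell
station = np.array([5, 20, 80, 140, 190, 220, 180, 120, 210, 160, 40, 10], float)
cfsr = station + np.random.default_rng(3).normal(0, 15, station.size)
print({k: round(v, 3) for k, v in goodness_of_fit(station, cfsr).items()})
```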

Keywords: boundary condition, goodness of fit, groundwater, satellite-based data

Procedia PDF Downloads 130
425 Grammar as a Logic of Labeling: A Computer Model

Authors: Jacques Lamarche, Juhani Dickinson

Abstract:

This paper introduces a computational model of a Grammar as Logic of Labeling (GLL), where the lexical primitives of morphosyntax are phonological matrixes, the forms of words, understood as labels that apply to realities (or targets) assumed to be outside of grammar altogether. The hypothesis is that even though a lexical label relates to its target arbitrarily, this label in a complex (constituent) label is part of a labeling pattern which, depending on its value (i.e., N, V, Adj, etc.), imposes language-specific restrictions on what it targets outside of grammar (in the world/semantics or in cognitive knowledge). Lexical forms categorized as nouns, verbs, adjectives, etc., are effectively targets of labeling patterns in use. The paper illustrates GLL through a computer model of basic patterns in English NPs. A constituent label is a binary object that encodes: i) alignment of input forms, so that labels occurring at different points in time are understood as applying at once; ii) endocentric structuring - every grammatical constituent has a head label that determines the target of the constituent, and a limiter label (the non-head) that restricts this target. The N and A values are restricted to limiter labels, the two differing in terms of alignment with a head. Consider the head-initial DP ‘the dog’: the label ‘dog’ gets an N value because it is a limiter that is evenly aligned with the head ‘the’, restricting the application of the DP. Adapting a traditional analysis of ‘the’ to GLL – apply the label to something familiar – the DP targets and identifies one reality familiar to participants by applying to it the label ‘dog’ (singular). Consider next the DP ‘the large dog’: ‘large dog’ is nominal by even alignment with ‘the’, as before, and since ‘dog’ is the head of (head-final) ‘large dog’, it is also nominal. The label ‘large’, however, is adjectival by narrow alignment with the head ‘dog’: it does not target the head but targets a property of what ‘dog’ applies to (a property or attribute value). In other words, the internal composition of constituents determines whether a form targets a property or a reality: ‘large’ and ‘dog’ happen to be valid targets to realize this constituent. In the presentation, the computer model of the analysis derives the 8 possible sequences of grammatical values with three labels after the determiner (the x y z): 1- D [ N [ N N ]]; 2- D [ A [ N N ] ]; 3- D [ N [ A N ] ]; 4- D [ A [ A N ] ]; 5- D [ [ N N ] N ]; 5- D [ [ A N ] N ]; 6- D [ [ N A ] N ] 7- [ [ N A ] N ] 8- D [ [ Adv A ] N ]. This approach suggests that a computer model of these grammatical patterns could be used to construct ontologies/knowledge using speakers’ judgments about the validity of lexical meaning in grammatical patterns.

Keywords: syntactic theory, computational linguistics, logic and grammar, semantics, knowledge and grammar

Procedia PDF Downloads 38
424 Concepts of Instrumentation Scheme for Thought Transfer

Authors: Rai Sachindra Prasad

Abstract:

Thought is a physical force. This has been well recognized but hardly translated, visually or otherwise, in the sense of its transfer from one individual to another. In the present world of chaos and disorder, with yawning gaps between right- and wrong-thinking individuals, if it were possible to transfer the right thoughts to replace the wrong ones, it would indeed be a great achievement in the present situation of a world torn by violence and the dangerous thoughts of individuals. Moreover, such a possibility would completely remove the barrier of language between two persons, which at times proves to be a great obstacle in realizing a desired purpose. If a proper instrumentation scheme containing appropriate transducers and electronics were designed and implemented to realize this thought transfer phenomenon, it would prove to be extremely useful when properly used. Considering the advancements already made in recording nerve impulses in the brain, which are electrical events of very short duration that move along the axon, it is conceivable that these may be used to good effect in implementing the scheme. In such a proposition, one should consider the roles played by the pineal body, the pituitary gland and the ‘association’ areas. Pioneering students of the brain thought that associations or connections between sensory input and motor output were made in these areas. It is currently believed that, rather than being regions of simple sensory-motor connections, the association areas process and integrate sensory information relayed to them from the primary sensory areas of the cortex and from the thalamus; after the information has been processed, it may be sent to motor areas to be acted upon. Again, even though the role played by the pineal body is not fully known to neurologists, its interconnection with the pituitary gland is a matter of great significance to the ‘Rishis’ and ‘Seers’ described in the Vedas and Puranas, the ancient holy books of the Hindus. If the pineal body is activated through meditation, it would control the pituitary gland and thereby the individual’s thoughts and acts. Thus, if thoughts can be picked up by special transducers, these can be connected to suitable electronic circuitry to amplify the signals. These signals, in the form of electromagnetic waves, can then be transmitted using modems for long-distance transmission and eventually received by, or passed on to, a subject of interest through another set of electronic circuits and devices.

Keywords: modems, pituitary gland, pineal body, thought transfer

Procedia PDF Downloads 372
423 Design, Construction and Evaluation of a Mechanical Vapor Compression Distillation System for Wastewater Treatment in a Poultry Company

Authors: Juan S. Vera, Miguel A. Gomez, Omar Gelvez

Abstract:

Water is Earth's most valuable resource, and the lack of it is currently a critical problem in today’s society. Untreated wastewaters contribute to this situation, especially those coming from industrial activities, as they reduce the quality of receiving water bodies, annihilating all kinds of life and bringing disease to people in contact with them. An effective solution for this problem is distillation, which removes most contaminants. However, this approach must also be energetically efficient in order to appeal to industry. In this respect, most water distillation treatments fall short, with the exception of the Mechanical Vapor Compression (MVC) distillation system, which has a high efficiency due to the energy input by a compressor and the latent heat exchange. This paper presents the process of design, construction, and evaluation of a Mechanical Vapor Compression (MVC) distillation system for the main Colombian poultry company, Avidesa Macpollo SA. The system will be located in the principal slaughterhouse in the state of Santander, and it will work along with the Gas Energy Mixing system (GEM) to treat the wastewaters from the plant. The main goal of the MVC distiller, rarely used in this type of application, is to reduce the chlorides, Chemical Oxygen Demand (COD) and Biological Oxygen Demand (BOD) levels according to the state regulations, since the GEM cannot decrease them enough. The MVC distillation system consists of three components: the evaporator/condenser heat exchanger where the distillation takes place, a low-pressure compressor which provides the energy to create the temperature differential between the evaporator and condenser cavities, and a preheater to recover the remaining energy from the distillate. The model equations used to describe how the compressor power consumption, heat exchange area and distilled water production are related are based on a thermodynamic balance and heat transfer analysis, with correlations taken from the literature. Finally, the design calculations and the measurements of the installation are compared, showing agreement with the predictions for distillate production and power consumption as the temperature difference of the evaporator/condenser changes.
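The governing relations behind the design model (a thermodynamic balance plus heat-transfer sizing) reduce, in their simplest form, to an isentropic compressor-work estimate and a UA sizing of the evaporator/condenser; the sketch below uses textbook ideal-gas relations with illustrative property values, not the plant's actual design data or correlations.

```python
# Simplified MVC design relations (illustrative property values, not plant data)
m_dot = 0.5            # distillate flow rate, kg/s
h_fg = 2_257e3         # latent heat of water at ~100 C, J/kg
cp_vapor, gamma = 1.9e3, 1.33   # steam heat capacity (J/kg K) and heat-capacity ratio
T1 = 373.0             # compressor suction temperature, K
pressure_ratio = 1.2   # compressor pressure ratio setting the delta-T
eta_isentropic = 0.70  # assumed compressor isentropic efficiency
U = 2_500.0            # overall heat-transfer coefficient, W/m2 K
dT = 6.0               # evaporator/condenser temperature difference, K

# Compressor power: ideal-gas isentropic work divided by isentropic efficiency
w_ideal = cp_vapor * T1 * (pressure_ratio ** ((gamma - 1) / gamma) - 1.0)
P_compressor = m_dot * w_ideal / eta_isentropic          # W

# Heat duty transferred across the evaporator/condenser and required area
Q = m_dot * h_fg                                         # W
A = Q / (U * dT)                                         # m2

print(f"compressor power ~ {P_compressor/1e3:.1f} kW, exchanger area ~ {A:.1f} m2")
```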

Keywords: mechanical vapor compression, distillation, wastewater, design, construction, evaluation

Procedia PDF Downloads 159
422 Hansen Solubility Parameters, Quality by Design Tool for Developing Green Nanoemulsion to Eliminate Sulfamethoxazole from Contaminated Water

Authors: Afzal Hussain, Mohammad A. Altamimi, Syed Sarim Imam, Mudassar Shahid, Osamah Abdulrahman Alnemer

Abstract:

Extensive use of sulfamethoxazole (SUX) has become a global threat to human health due to water contamination from diverse sources. This work addressed the combined application of Hansen solubility parameters (HSPiP software) and the Quality by Design tool for developing various green nanoemulsions. The HSPiP program assisted in screening suitable excipients based on Hansen solubility parameters and experimental solubility data. Various green nanoemulsions were prepared and characterized for globular size, size distribution, zeta potential, and removal efficiency. Design Expert (DoE) software further helped to identify the critical factors with a direct impact on percent removal efficiency, size, and viscosity. The morphology was visualized under transmission electron microscopy (TEM). Finally, the treated water was studied to confirm the absence of the tested drug, employing ICP-OES (inductively coupled plasma optical emission spectroscopy) and HPLC (high-performance liquid chromatography). Results showed that HSPiP predicted a biocompatible lipid, a safe surfactant (lecithin), and propylene glycol (PG). The experimental solubility of the drug in the predicted excipients was quite convincing and vindicated the screening. Various green nanoemulsions were fabricated, and these were evaluated for in vitro performance. Globular size (100-300 nm), PDI (0.1-0.5), zeta potential (~25 mV), and removal efficiency (%RE = 70-98%) were found to be in an acceptable range for deciding the input factors and their levels in DoE. The experimental design tool assisted in identifying the most critical variables controlling %RE and the optimized composition of the nanoemulsion under the set constraints. The dispersion time was varied from 5-30 min. Finally, the ICP-OES and HPLC techniques corroborated the absence of SUX in the treated water. Thus, the strategy is simple, economical, selective, and efficient.
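The excipient-screening step rests on the Hansen distance in solubility-parameter space; the sketch below computes Ra and the relative energy difference (RED) for a few candidate excipients, with the δD/δP/δH values and the interaction radius R0 entered as illustrative placeholders rather than values taken from HSPiP.

```python
import math

def hansen_distance(sp1, sp2):
    """Ra^2 = 4(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2  (MPa^0.5)."""
    (d1, p1, h1), (d2, p2, h2) = sp1, sp2
    return math.sqrt(4 * (d1 - d2) ** 2 + (p1 - p2) ** 2 + (h1 - h2) ** 2)

# (dD, dP, dH) in MPa^0.5 -- placeholder values, not HSPiP outputs
sulfamethoxazole = (19.0, 8.0, 11.0)
R0 = 8.0                      # assumed interaction radius of the drug
candidates = {
    "lecithin": (16.5, 6.0, 8.0),
    "propylene glycol": (16.8, 10.4, 21.3),
    "oleic acid": (16.0, 3.0, 5.5),
}

for name, hsp in candidates.items():
    ra = hansen_distance(sulfamethoxazole, hsp)
    red = ra / R0             # RED < 1 suggests good mutual affinity
    print(f"{name:18s} Ra = {ra:5.2f}  RED = {red:4.2f}")
```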

Keywords: quality by design, sulfamethoxazole, green nanoemulsion, water treatment, ICP-OES, Hansen program (HSPiP software)

Procedia PDF Downloads 82
421 Inertia Friction Pull Plug Welding, a New Weld Repair Technique of Aluminium Friction Stir Welding

Authors: Guoqing Wang, Yanhua Zhao, Lina Zhang, Jingbin Bai, Ruican Zhu

Abstract:

Friction stir welding with a bobbin tool is a simple technique compared to conventional FSW, since the backing fixture is no longer needed and assembly labor is reduced. As a result, it is increasingly adopted in the aerospace industry. However, a post-weld problem, the keyhole left behind, has to be fixed by forced repair welding. To close the keyhole, conventional fusion repair could be an option if the joint properties are not deteriorated; friction push plug welding, a forced repair, could be another, except that a rigid support unit is demanded at the back of the weldment. Therefore, neither of the above approaches is satisfactory for welding a large enclosed structure, like a rocket propellant tank. Although friction pull plug welding does not need a backing plate, its wider application is still held back by disadvantages in respect of inappropriate tensile stress (excessive stress causes neck shrinkage of the plug and brings about back-side defects, while insufficient stress causes a lack of heat input and brings about face-side defects), complicated welding parameters (including rotation speed, traverse speed, friction force, welding pressure and upset), short welding time (approx. 0.5 s), narrow process windows and poor process stability. In this research, an updated technique called inertia friction pull plug welding, together with its equipment, was developed. The influence of the technological parameters on the joint properties of inertia friction pull plug welding was observed, and the microstructure characteristics were analyzed. Based on the elementary performance data acquired, it is concluded that the uniform energy provided by an inertia flywheel guarantees a stable welding process. Meanwhile, since no backing plate is required, inertia friction pull plug welding is considered a promising technique for repairing the keyhole of bobbin tool FSW and point-type defects in aluminium base material.

Keywords: defect repairing, equipment, inertia friction pull plug welding, technological parameters

Procedia PDF Downloads 313
420 Comfort Sensor Using Fuzzy Logic and Arduino

Authors: Samuel John, S. Sharanya

Abstract:

Automation has become an important part of our life. It has been used to control home entertainment systems, change the ambience of rooms for different events, etc. One of the main parameters to control in a smart home is atmospheric comfort, which mainly includes temperature and relative humidity. In homes, the desired temperature of different rooms varies from 20 °C to 25 °C and the desired relative humidity is around 50%; however, both vary widely. Hence, automated measurement of these parameters to ensure comfort assumes significance. To achieve this, a fuzzy logic controller on Arduino was developed using MATLAB. Arduino is open-source hardware consisting of a 28-pin ATmega chip (ATmega328), 14 digital input/output pins and an inbuilt ADC. It runs on 5 V and 3.3 V power supported by an on-board voltage regulator. Some of the digital pins on the Arduino provide PWM (pulse width modulation) signals, which can be used in different applications. The Arduino platform provides an integrated development environment, which includes support for the C, C++ and Java programming languages. In the present work, a soft sensor was introduced into this system that can indirectly measure temperature and humidity and process these measurements to ensure comfort. The Sugeno method (in which output variables are functions or singletons/constants, making it more suitable for implementation on microcontrollers) was used in the soft sensor in MATLAB and then interfaced to the Arduino, which in turn is interfaced to the temperature and humidity sensor DHT11. The temperature-humidity sensor DHT11 acts as the sensing element in this system. Further, a capacitive humidity sensor and a thermistor were also used to support the measurement of the temperature and relative humidity of the surroundings and to provide a digital signal on the data pin. The comfort sensor developed was able to measure temperature and relative humidity correctly. The comfort percentage was calculated, and accordingly the temperature in the room was controlled. This system was placed in different rooms of the house to ensure that it modifies the comfort values depending on the temperature and relative humidity of the environment. Compared to existing comfort control sensors, this system was found to provide an accurate comfort percentage. Depending on the comfort percentage, the air conditioners and the coolers in the room were controlled. The main highlight of the project is its cost efficiency.
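The Sugeno-type inference mentioned above (constant or singleton consequents, well suited to microcontrollers) can be illustrated in a few lines; the membership breakpoints and output constants below are illustrative and are not the values from the MATLAB design.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def comfort_percent(temp_c, rh_percent):
    """Zero-order Sugeno inference: weighted average of constant outputs."""
    # Rule firing strengths (min of antecedent memberships); breakpoints assumed
    rules = [
        (min(tri(temp_c, 18, 22.5, 27), tri(rh_percent, 30, 50, 70)), 95.0),  # comfortable
        (min(tri(temp_c, 25, 30, 40),  tri(rh_percent, 50, 75, 100)), 30.0),  # hot and humid
        (min(tri(temp_c, 0, 12, 19),   tri(rh_percent, 0, 30, 55)),   40.0),  # cold and dry
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 50.0   # fallback when no rule fires

# Readings as they might come from a DHT11 sensor
print(f"comfort at 23 C / 48 %RH: {comfort_percent(23, 48):.1f} %")
print(f"comfort at 33 C / 80 %RH: {comfort_percent(33, 80):.1f} %")
```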

Keywords: arduino, DHT11, soft sensor, sugeno

Procedia PDF Downloads 312
419 Fabrication of Highly Stable Low-Density Self-Assembled Monolayers by Thiol-Yne Click Reaction

Authors: Leila Safazadeh, Brad Berron

Abstract:

Self-assembled monolayers have a tremendous impact in interfacial science, due to the unique opportunity they offer to tailor surface properties. Low-density self-assembled monolayers are an emerging class of monolayers where the environment-interfacing portion of the adsorbate has a greater level of conformational freedom when compared to traditional monolayer chemistries. This greater range of motion and increased spacing between surface-bound molecules offers new opportunities for tailoring adsorption phenomena in sensing systems. In particular, we expect low-density surfaces to offer a unique opportunity to intercalate surface-bound ligands into the secondary structure of proteins and other macromolecules. Additionally, as many conventional sensing surfaces are built upon gold substrates (SPR or QCM), these surfaces must be compatible with gold. Here, we present the first stable method of generating low-density self-assembled monolayer surfaces on gold for the analysis of their interactions with protein targets. Our approach is based on the 2:1 addition of thiol-yne chemistry to develop new classes of Y-shaped adsorbates on gold, where the environment-interfacing group is spaced laterally from neighboring chemical groups. This technique involves an initial deposition of a crystalline monolayer of 1,10-decanedithiol on the gold substrate, followed by grafting of a loosely packed monolayer through a photoinitiated thiol-yne reaction in the presence of light. The orthogonality of the thiol-yne chemistry (commonly referred to as a click chemistry) allows for the preparation of low-density monolayers with a variety of functional groups. To date, carboxyl-, amine-, alcohol-, and alkyl-terminated monolayers have been prepared using this core technology. Results from surface characterization techniques such as FTIR, contact angle goniometry and electrochemical impedance spectroscopy confirm the proposed low chain-chain interactions of the environment-interfacing groups. Reductive desorption measurements suggest a higher stability for the click-LDMs compared to traditional SAMs, along with an equivalent packing density at the substrate interface, which confirms the proposed stability of the monolayer-gold interface. In addition, contact angle measurements change in the presence of an applied potential, supporting our description of a surface structure which allows the alkyl chains to freely orient themselves in response to different environments. We are studying the differences in protein adsorption phenomena between well-packed and our loosely packed surfaces, and we expect these data will be ready to present at the GRC meeting. This work aims to contribute to biotechnology in the following manner: molecularly imprinted polymers are a promising recognition mode with several advantages over natural antibodies in the recognition of small molecules. However, because of their bulk polymer structure, they are poorly suited for the rapid diffusion desired for the recognition of proteins and other macromolecules. Molecularly imprinted monolayers are an emerging class of materials where only the surface is imprinted, so there is no bulk material to impede mass transfer. Further, the short distance between the binding site and the signal transduction material improves many modes of detection. My dissertation project is to develop a new chemistry for protein-imprinted self-assembled monolayers on gold, for incorporation into SPR sensors. Our unique contribution is the spatial imprinting not only of physical cues (seen in current imprinted monolayer techniques) but also of complementary chemical cues. This is accomplished through photo-click grafting of preassembled ligands around a protein template. This conference is important for my development as a graduate student, broadening my appreciation of sensor development beyond surface chemistry.

Keywords: low-density self-assembled monolayers, thiol-yne click reaction, molecular imprinting

Procedia PDF Downloads 226
418 The KAPSARC Energy Policy Database: Introducing a Quantified Library of China's Energy Policies

Authors: Philipp Galkin

Abstract:

Government policy is a critical factor in the understanding of energy markets. Nevertheless, it is rarely approached systematically from a research perspective. Gaining a precise understanding of what policies exist, their intended outcomes, geographical extent, duration, evolution, etc. would enable the research community to answer a variety of questions that, for now, are either oversimplified or ignored. Policy, on its surface, also seems a rather unstructured and qualitative undertaking. There may be quantitative components, but incorporating the concept of policy analysis into quantitative analysis remains a challenge. The KAPSARC Energy Policy Database (KEPD) is intended to address these two energy policy research limitations. Our approach is to represent policies within a quantitative library of the specific policy measures contained within a set of legal documents. Each of these measures is recorded in the database as a single entry characterized by a set of qualitative and quantitative attributes. Initially, we have focused on the major laws at the national level that regulate coal in China. However, KAPSARC is engaged in various efforts to apply this methodology to other energy policy domains. To ensure the scalability and sustainability of our project, we are exploring semantic processing using automated computer algorithms. Automated coding can provide more convenient input data for human coders and serve as a quality control option. Our initial findings suggest that the methodology utilized in the KEPD could be applied to any set of energy policies. It also provides a convenient tool to facilitate understanding in the energy policy realm, enabling the researcher to quickly identify, summarize, and digest policy documents and specific policy measures. The KEPD captures a wide range of information about each individual policy contained within a single policy document. This enables a variety of analyses, such as structural comparison of policy documents, tracing policy evolution, stakeholder analysis, and exploring interdependencies of policies and their attributes with exogenous datasets using statistical tools. The usability and broad range of research implications suggest a need for the continued expansion of the KEPD to encompass a larger scope of policy documents across geographies and energy sectors.
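The idea of representing each policy measure as a single quantified entry with qualitative and quantitative attributes can be made concrete with a small schema; the field names and the example entry below are illustrative and do not reproduce the actual KEPD column set.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class PolicyMeasure:
    """One entry of a quantified policy library (illustrative fields only)."""
    document: str                  # legal document the measure is contained in
    measure: str                   # the specific policy measure
    sector: str                    # e.g. coal, power, transport
    instrument: str                # e.g. target, subsidy, standard, ban
    geography: str                 # geographical extent
    start_year: int
    end_year: Optional[int]
    target_value: Optional[float]
    target_unit: Optional[str]

entry = PolicyMeasure(
    document="Energy Development Strategy Action Plan (2014-2020)",  # example title
    measure="Cap national coal consumption",
    sector="coal", instrument="target", geography="national",
    start_year=2014, end_year=2020,
    target_value=4.2e9, target_unit="tonnes/year",
)
print(asdict(entry))
```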

Keywords: China, energy policy, policy analysis, policy database

Procedia PDF Downloads 322
417 Dynamic Simulation of a Hybrid Wind Farm with Wind Turbines and Distributed Compressed Air Energy Storage System

Authors: Eronini Iheanyi Umez-Eronini

Abstract:

Most studies and existing implementations of compressed air energy storage (CAES) coupled with a wind farm to overcome the intermittency and variability of wind power are based on bulk or centralized CAES plants. A dynamic model of a hybrid wind farm with wind turbines and distributed CAES, consisting of air storage tanks and compressor and expander trains at each wind turbine station, is developed and simulated in MATLAB. An ad hoc supervisory controller, in which the wind turbines are simply operated under classical power-optimizing region control while power production by the expanders and air storage by the compressors are scheduled, including modulation of the compressor power levels within a control range, is used to regulate overall farm power production to track a minute-scale (3-minute sampling period) TSO absolute power reference signal over an eight-hour period. Simulation results for real wind data input, with a simple wake field model applied to a hybrid plant composed of ten 5-MW wind turbines in a row and ten compatibly sized and configured diabatic CAES stations, show the plant controller is able to track the power demand signal within an error band on the order of the electrical power rating of a single expander. This performance suggests that much improved results should be anticipated when the global D-CAES control is combined with power regulation for the individual wind turbines using available approaches for wind farm active power control. For a standalone-plant fuel-to-electricity efficiency estimate of up to 60%, the round-trip electrical storage efficiency computed for the distributed CAES, wherein heat generated by running compressors is utilized in the preheat stage of the running high-pressure expanders while fuel is introduced and combusted before the low-pressure expanders, was comparable to reported round-trip storage electrical efficiencies for bulk adiabatic CAES.
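The scheduling idea of the ad hoc supervisory controller can be sketched as a simple per-interval dispatch rule: wind power is delivered first, expanders cover any shortfall against the TSO reference, and compressors absorb any surplus within their control range; the plant sizes, limits and data below are placeholders, not the MATLAB model.

```python
import numpy as np

# Placeholder plant parameters (ten turbines / ten D-CAES stations aggregated)
P_EXP_MAX = 20.0     # total expander rating, MW
P_COMP_MAX = 15.0    # total compressor rating, MW
P_COMP_MIN = 3.0     # minimum modulated compressor level when charging, MW
DT_H = 3.0 / 60.0    # 3-minute sampling period, hours

rng = np.random.default_rng(7)
wind_mw = np.clip(30 + 10 * rng.standard_normal(160), 0, 50)    # farm wind power
reference_mw = 35 + 5 * np.sin(np.linspace(0, 4 * np.pi, 160))  # TSO reference

storage_mwh, farm_out = 50.0, []
for p_wind, p_ref in zip(wind_mw, reference_mw):
    shortfall = p_ref - p_wind
    if shortfall >= 0:                       # discharge: expanders make up the gap
        p_exp = min(shortfall, P_EXP_MAX, storage_mwh / DT_H)
        storage_mwh -= p_exp * DT_H
        farm_out.append(p_wind + p_exp)
    else:                                    # surplus: compressors store wind energy
        p_comp = min(-shortfall, P_COMP_MAX)
        p_comp = p_comp if p_comp >= P_COMP_MIN else 0.0
        storage_mwh += p_comp * DT_H         # (round-trip losses ignored here)
        farm_out.append(p_wind - p_comp)

err = np.abs(np.array(farm_out) - reference_mw)
print(f"mean |tracking error| = {err.mean():.2f} MW, final storage = {storage_mwh:.1f} MWh")
```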

Keywords: hybrid wind farm, distributed CAES, diabatic CAES, active power control, dynamic modeling and simulation

Procedia PDF Downloads 82
416 Investigation on Development of PV and Wind Power with Hydro Pumped Storage to Increase Renewable Energy Penetration: A Parallel Analysis of Taiwan and Greece

Authors: Robel Habtemariam

Abstract:

Globally, wind energy and photovoltaic (PV) solar energy are among the leading renewable energy sources (RES) in terms of installed capacity. In order to increase the contribution of RES to the power supply system, large-scale energy integration is required, mainly from wind energy and PV. In this paper, an investigation has been made of the electrical power supply systems of Taiwan and Greece in order to integrate high levels of wind and photovoltaic (PV) power and thus increase the penetration of renewable energy resources. Currently, both countries depend heavily on fossil fuels to meet demand and generate adequate electricity. Therefore, this study looks into the two countries' power supply systems by developing a methodology that includes the major power units. To address the analysis, an approach for the simulation of power systems is formulated and applied. The simulation is based on a non-dynamic analysis of the electrical system. This simulation results in calculating the energy contribution of different types of power units, namely wind, PV, non-flexible and flexible power units. The calculation is done for three different scenarios (2020, 2030 and 2050), where the first two scenarios are based on national targets and the 2050 scenario is a reflection of ambitious global targets. By 2030 in Taiwan, the input of the power units is evaluated as 4.3% (wind), 3.7% (PV), 65.2% (non-flexible), 25.3% (flexible), and 1.5% belonging to hydropower plants. In Greece, a much higher renewable energy contribution is observed for the same scenario, with 21.7% (wind), 14.3% (PV), 38.7% (non-flexible), 14.9% (flexible), and 10.3% (hydro). Moreover, the study examines the ability of the power systems to deal with the variable nature of wind and PV generation. For this reason, an investigation has also been made into combining wind power with pumped storage systems (WPS) to enable the system to exploit curtailed wind energy and surplus PV and thus increase the wind and PV installed capacity and replace peak supply by conventional power units. Results show that the feasibility of pumped storage can be justified in the high scenario of RES integration (that is, the 2050 scenario), especially in the case of Greece.

Keywords: large scale energy integration, photovoltaics solar energy, pumped storage systems, renewable energy sources

Procedia PDF Downloads 277
415 Pose-Dependency of Machine Tool Structures: Appearance, Consequences, and Challenges for Lightweight Large-Scale Machines

Authors: S. Apprich, F. Wulle, A. Lechler, A. Pott, A. Verl

Abstract:

Large-scale machine tools for the manufacturing of large workpieces, e.g. blades, casings or gears for wind turbines, feature pose-dependent dynamic behavior. Small structural damping coefficients lead to long decay times for structural vibrations that have negative impacts on the production process. Typically, these vibrations are handled by increasing the stiffness of the structure by adding mass. That is counterproductive to the needs of sustainable manufacturing, as it leads to higher resource consumption both in material and in energy. Recent research activities have achieved higher resource efficiency through radical mass reduction, relying on control-integrated active vibration avoidance and damping methods. These control methods depend on information describing the dynamic behavior of the controlled machine tools in order to tune the avoidance or reduction method parameters according to the current state of the machine. This paper presents the appearance, consequences and challenges of the pose-dependent dynamic behavior of lightweight large-scale machine tool structures in production. The paper starts with a theoretical introduction to the challenges of lightweight machine tool structures resulting from reduced stiffness. The claim of pose-dependent dynamic behavior is corroborated by the results of an experimental modal analysis of a lightweight test structure. Afterwards, the consequences of the pose-dependent dynamic behavior of lightweight machine tool structures for the use of active control and vibration reduction methods are explained. Based on the state of the art on pose-dependent dynamic machine tool models and the modal investigation of an FE model of the lightweight test structure, the criteria for a pose-dependent model for use in vibration reduction are derived. The paper closes with an outlook on an approach for a general pose-dependent model of the dynamic behavior of large lightweight machine tools that provides the necessary input to the aforementioned vibration avoidance and reduction methods to properly tackle machine vibrations.

Keywords: dynamic behavior, lightweight, machine tool, pose-dependency

Procedia PDF Downloads 459
414 Submicron Laser-Induced Dot, Ripple and Wrinkle Structures and Their Applications

Authors: P. Slepicka, N. Slepickova Kasalkova, I. Michaljanicova, O. Nedela, Z. Kolska, V. Svorcik

Abstract:

Polymers exposed to laser or plasma treatment, or modified with different wet methods which enable the introduction of nanoparticles or biologically active species such as amino acids, may find many applications as biocompatible or anti-bacterial materials or, on the contrary, can be applied to decrease the number of cells on the treated surface, which opens applications in single-cell units. For the experiments, two types of materials were chosen: polyethersulphone (PES) as a representative of non-biodegradable polymers and polyhydroxybutyrate (PHB) as a biodegradable material. Exposure of a solid substrate to a laser well below the ablation threshold can lead to the formation of various surface structures. The ripples have a period roughly comparable to the wavelength of the incident laser radiation, and their dimensions depend on many factors, such as the chemical composition of the polymer substrate, the laser wavelength and the angle of incidence. Biopolymers, in turn, may significantly change their surface roughness and thus influence cell compatibility. The focus was on the surface treatment of PES and PHB by a pulsed KrF excimer laser with a wavelength of 248 nm. The changes in physicochemical properties, surface morphology, surface chemistry and ablation of the exposed polymers were studied for both PES and PHB. Several analytical methods involving atomic force microscopy, gravimetry, scanning electron microscopy and others were used for the analysis of the treated surface. It was found that the combination of certain input parameters leads not only to the formation of an optimal narrow pattern but also to the combination of a ripple and a wrinkle-like structure, which could be an optimal candidate for cell attachment. The interactions of different types of cells with the laser-exposed surface were studied. It was found that laser treatment is a major factor contributing to the wettability/contact angle change. The combination of optimal laser energy and pulse number was used for the construction of a surface with an anti-cellular response. Due to the simple laser treatment, we were able to prepare a biopolymer surface with higher roughness and thus significantly influence the area of growth of different types of cells (U-2 OS cells).

Keywords: cell response, excimer laser, polymer treatment, periodic pattern, surface morphology

Procedia PDF Downloads 236
413 Interactive Teaching and Learning Resources for Bilingual Education

Authors: Sarolta Lipóczi, Ildikó Szabó

Abstract:

The use of ICT in European schools has increased over the last decade, but there is still room for improvement; interactive technology is often used below its technical and pedagogical potential. The pedagogical potential of interactive technology has not yet reached classrooms in different countries in a substantial way. Developing such materials requires cooperation between educational researchers and teachers from different backgrounds. The INTACT project brings together experts from science education, mathematics education, social science education and foreign language education – with a focus on bilingual education – and teachers in secondary and primary schools to develop a variety of pedagogically high-quality interactive teaching and learning resources. Because of the backgrounds of the consortium members, the INTACT project focuses on the areas of science, mathematics and social sciences. To combine these two features (science/math and foreign language), the project focuses on bilingual education. A key benefit supported by 'interactiveness' is social and collaborative learning. The easy communication and collaboration offered by web 2.0 tools and mobile devices connected to the learning material allow students to work and learn together. There will be a wide range of possibilities for school cooperation at regional, national and also international level, allowing students to communicate and cooperate with other students beyond the classroom borders while using these interactive teaching materials. Opening up the learning scenario enhances the social, civic and cultural competences of the students by fostering their social skills and improving their cultural appreciation of other nations in Europe. To enable teachers to use the materials in the intended ways, descriptions of successful learning scenarios (i.e. using design patterns) will be provided as well. These materials and descriptions will be made available to teachers through teacher training, teacher journals, booklets and online materials. The resources can also be used in different settings, including the use of a projector and a touchpad or other interactive devices for input, e.g. mobile phones. Kecskemét College, as a partner of the INTACT project, has developed two teaching and learning resources in the area of foreign language teaching. This article introduces these resources as well.

Keywords: bilingual educational settings, international cooperation, interactive teaching and learning resources, work across culture

Procedia PDF Downloads 395
412 Comparison of Existing Predictors and Development of a Computational Method for S-Palmitoylation Site Identification in Arabidopsis Thaliana

Authors: Ayesha Sanjana Kawser Parsha

Abstract:

S-acylation is a reversible post-translational modification in which cysteine residues are linked to the fatty acids palmitate (74%) or stearate (22%), either at the COOH or NH2 terminus, via a thioester linkage. Several experimental methods can identify S-palmitoylation sites; however, since they are time-consuming, computational methods are becoming increasingly necessary. There are, however, few predictors that can locate S-palmitoylation sites in Arabidopsis thaliana with sufficient accuracy. This research is motivated by the need for a better prediction tool. To identify the type of machine learning algorithm that predicts this site most accurately for the experimental dataset, several prediction tools were examined, including GPS-Palm 6.0, pCysMod, GPS-Lipid 1.0, CSS-Palm 4.0, and NBA-Palm. These analyses were conducted by constructing receiver operating characteristic (ROC) plots and computing the area under the curve (AUC) score. Building on this analysis, an AI-driven, deep learning-based prediction tool was developed using three sequence-based input features: amino acid composition, binary encoding profiles, and autocorrelation features. The model was built with five layers, two activation functions, and associated parameters and hyperparameters. It was trained using various combinations of features and, after training and validation, performed best when all features were present, using the experimental dataset for 8- and 10-fold cross-validation. When tested with unseen data, such as the GPS-Palm 6.0 plant and pCysMod mouse datasets, the model also performed well, with an AUC score near 1. Comparing the 10-fold cross-validation AUC of the new model with the AUC of the established tools on their respective training sets demonstrates that this model outperforms the prior tools in predicting S-palmitoylation sites in the experimental dataset. The objective of this study is to develop a prediction tool for Arabidopsis thaliana that is more accurate than current tools, as measured by the AUC score. Forecasting S-palmitoylation sites with this method can support both plant food production and the selection of immunological treatment targets.
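A minimal sketch of how such sequence-based features and a small five-layer network might be combined is given below; the window length, layer widths, toy data and use of Keras/scikit-learn are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: sequence-based features and a small dense network
# for S-palmitoylation site prediction, evaluated by ROC AUC.
# Window length, layer widths and the toy data are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score
from tensorflow import keras
from tensorflow.keras import layers

AA = "ACDEFGHIKLMNPQRSTVWY"

def aac(peptide):
    """Amino acid composition: 20 relative frequencies."""
    return np.array([peptide.count(a) / len(peptide) for a in AA])

def one_hot(peptide):
    """Binary encoding profile: 20 bits per residue, flattened."""
    m = np.zeros((len(peptide), len(AA)))
    for i, res in enumerate(peptide):
        if res in AA:
            m[i, AA.index(res)] = 1.0
    return m.ravel()

def encode(peptide):
    return np.concatenate([aac(peptide), one_hot(peptide)])

# Toy example: 21-residue windows centred on a cysteine, label 1 = palmitoylated
rng = np.random.default_rng(0)
windows = ["".join(rng.choice(list(AA), 21)) for _ in range(200)]
y = rng.integers(0, 2, size=200)
X = np.stack([encode(w) for w in windows])

model = keras.Sequential([
    layers.Input(shape=(X.shape[1],)),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # second activation function
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X[:150], y[:150], epochs=5, batch_size=32, verbose=0)

# Hold-out ROC AUC on the remaining toy windows
auc = roc_auc_score(y[150:], model.predict(X[150:], verbose=0).ravel())
print(f"hold-out AUC: {auc:.2f}")
```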

Keywords: S-palmitoylation, ROC plot, area under the curve, cross-validation score

Procedia PDF Downloads 76
411 Inversely Designed Chipless Radio Frequency Identification (RFID) Tags Using Deep Learning

Authors: Madhawa Basnayaka, Jouni Paltakari

Abstract:

Fully passive backscattering chipless RFID tags are an emerging wireless technology with low cost, longer reading distance, and fast automatic identification without human interference, unlike already available technologies such as optical barcodes. Design optimization is crucial for chipless RFID tags because the integrated chips found in conventional RFID tags are replaced with printed geometric designs that encode and decode data through backscattered electromagnetic (EM) signatures. The applications of chipless RFID tags have been limited by constraints on data encoding capacity and by the difficulty of designing accurate yet efficient configurations. The traditional approach to finding design parameters for a desired EM response involves iteratively adjusting the parameters and simulating until the desired EM spectrum is achieved. However, traditional numerical simulation methods are limited in how efficiently they can optimize design parameters because of their speed and resource consumption. In this work, a deep neural network (DNN) is used to establish a correlation between the EM spectrum and the dimensional parameters of nested concentric rings, specifically square and octagonal ones. The proposed bi-directional DNN consists of two simultaneously running neural networks: a spectrum prediction network and a design parameter prediction network. First, the spectrum prediction DNN was trained to minimize the mean squared error (MSE). After training, it could accurately predict the EM spectrum for given input design parameters within a few seconds. The trained spectrum prediction DNN was then connected to the design parameter prediction DNN, and the two networks were trained simultaneously. For the first time in chipless tag design, design parameters were predicted accurately for a desired EM spectrum after training the bi-directional DNN. The model was evaluated using a randomly generated spectrum, and a tag was manufactured using the predicted geometrical parameters. The manufactured tags were successfully tested in the laboratory. This approach significantly reduces the number of iterative computer simulations required. Highly efficient and ultrafast bi-directional DNN models therefore allow rapid design of complicated chipless RFID tags.
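One common way to realize such coupled forward/inverse training is a tandem arrangement in which the inverse (design parameter) network is trained through a frozen forward (spectrum prediction) network. The sketch below illustrates this idea with hypothetical layer sizes, dimensions and synthetic data; it is not the authors' exact scheme.

```python
# Illustrative sketch (not the authors' code) of a tandem / bi-directional DNN
# for inverse design: a forward net maps ring dimensions -> EM spectrum, an
# inverse net maps spectrum -> dimensions, and the inverse net is trained by
# passing its output through the frozen forward net.
# Layer sizes, dimensions and the synthetic data are hypothetical.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_PARAMS, N_FREQ = 8, 128          # ring dimensions, sampled spectrum points

def mlp(n_in, n_out, name):
    return keras.Sequential(
        [layers.Input(shape=(n_in,)),
         layers.Dense(256, activation="relu"),
         layers.Dense(256, activation="relu"),
         layers.Dense(n_out)],
        name=name)

forward = mlp(N_PARAMS, N_FREQ, "spectrum_prediction")
inverse = mlp(N_FREQ, N_PARAMS, "design_prediction")

# Synthetic stand-in for simulated training data (parameters -> spectra)
rng = np.random.default_rng(1)
P = rng.uniform(0.2, 3.0, size=(2000, N_PARAMS))      # e.g. ring widths in mm
S = np.sin(P @ rng.normal(size=(N_PARAMS, N_FREQ)))   # placeholder "spectra"

# Step 1: train the forward (spectrum prediction) network on MSE
forward.compile(optimizer="adam", loss="mse")
forward.fit(P, S, epochs=10, batch_size=64, verbose=0)

# Step 2: freeze the forward net and train the inverse net through it, so the
# predicted design reproduces the target spectrum
forward.trainable = False
spectrum_in = keras.Input(shape=(N_FREQ,))
design_pred = inverse(spectrum_in)
spectrum_rec = forward(design_pred)
tandem = keras.Model(spectrum_in, spectrum_rec)
tandem.compile(optimizer="adam", loss="mse")
tandem.fit(S, S, epochs=10, batch_size=64, verbose=0)

# Usage: propose ring dimensions for a desired spectrum
target = S[:1]
proposed_dimensions = inverse.predict(target, verbose=0)
print(proposed_dimensions.round(2))
```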

Keywords: artificial intelligence, chipless RFID, deep learning, machine learning

Procedia PDF Downloads 50
410 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection Using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies face global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality secures product quality through data-supported predictions, using machine learning models as a basis for decisions on test results. Machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Machine learning applications therefore require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of production conditions within certain time periods can be identified by applying a concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. In this research, an AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected, and accurate quality predictions are achieved.
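As a rough illustration of model-based feature selection with boosting, the sketch below fits an AdaBoost classifier, keeps only features with above-average importance, and compares predictive power before and after the reduction; the synthetic data, threshold and metric are assumptions, not the Bosch pipeline.

```python
# Illustrative sketch (not the production pipeline): AdaBoost-based leakage
# classification with feature selection by model-based feature importance.
# The synthetic data and the importance threshold are hypothetical.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

# Synthetic stand-in for cross-process data (machining, assembly, end-of-line)
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 40))                 # 40 candidate features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)  # 1 = leakage

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Fit AdaBoost on all features, then keep only the most important ones
base = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
selector = SelectFromModel(base, prefit=True, threshold="mean")
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# Refit on the reduced feature set and compare predictive power
reduced = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_train_sel, y_train)
print("features kept:", X_train_sel.shape[1], "of", X.shape[1])
print("full model   :", round(balanced_accuracy_score(y_test, base.predict(X_test)), 3))
print("reduced model:", round(balanced_accuracy_score(y_test, reduced.predict(X_test_sel)), 3))
```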

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 162
409 Yield Loss Estimation Using Multiple Drought Severity Indices

Authors: Sara Tokhi Arab, Rozo Noguchi, Tofeal Ahamed

Abstract:

Drought is a natural disaster that occurs in a region as a consequence of climate change, due to a lack of precipitation and high temperatures over a continuous period or in a single season. Precipitation deficits and prolonged high temperatures mostly affect the agricultural sector, water resources, socioeconomics, and the environment, causing agricultural product loss, food shortages, famine, migration, and natural resource degradation in a region. Agriculture is the first sector affected by drought. It is therefore important to develop an agricultural drought risk and loss assessment to mitigate the drought impact on the agriculture sector. In this context, the main purpose of this study was to assess yield loss using a composite drought index (CDI) in drought-affected vineyards. The CDI was developed for the years 2016 to 2020 by combining five indices: the vegetation condition index (VCI), the temperature condition index (TCI), the deviation of NDVI from the long-term mean (NDVI DEV), the normalized difference moisture index (NDMI) and the precipitation condition index (PCI). A quantitative principal component analysis (PCA) approach was used to assign a weight to each input parameter, and the weighted indices were then combined into one composite drought index. Finally, Bayesian regularized artificial neural networks (BRANNs) were used to evaluate the yield variation in each affected vineyard. The composite drought index indicated that moderate to severe droughts occurred across Kabul Province during 2016 and 2018. No vineyard was in extreme drought conditions; therefore, only the severe and moderate conditions were considered. According to the BRANN results, R = 0.87 and R = 0.94 in severe drought conditions for 2016 and 2018, and R = 0.85 and R = 0.91 in moderate drought conditions for 2016 and 2018, respectively. Within these two drought years, there was a significant yield deficit in the vineyards of Kabul Province. According to the findings, 2018 had the highest loss rate, almost -7 ton/ha, whereas in 2016 the loss rate was about -1.2 ton/ha. This research will support stakeholders in identifying drought-affected vineyards and help farmers during severe droughts.
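A minimal sketch of a PCA-based weighting of the five input indices into one composite drought index is shown below; the synthetic values and the specific choice of first-component loadings as weights are illustrative assumptions, not the exact procedure of the study.

```python
# Illustrative sketch: derive weights for a composite drought index (CDI) from a
# PCA of the five input indices and combine them into one index per observation.
# The values are synthetic and the weighting rule is an assumption.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = observations (pixels/vineyards over time); columns = the five indices,
# all assumed to be scaled to comparable ranges beforehand
rng = np.random.default_rng(7)
indices = rng.uniform(0, 1, size=(500, 5))
names = ["VCI", "TCI", "NDVI_DEV", "NDMI", "PCI"]

Z = StandardScaler().fit_transform(indices)
pca = PCA(n_components=1).fit(Z)

# One common weighting choice: loadings on the first component, normalised to sum to 1
loadings = np.abs(pca.components_[0])
weights = loadings / loadings.sum()
for n, w in zip(names, weights):
    print(f"{n}: weight = {w:.2f}")

# Composite drought index as the weighted sum of the (scaled) input indices
cdi = indices @ weights
print("CDI range:", cdi.min().round(2), "to", cdi.max().round(2))
```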

Keywords: grapes, composite drought index, yield loss, satellite remote sensing

Procedia PDF Downloads 157
408 Environmental Performance Improvement of Additive Manufacturing Processes with Part Quality Point of View

Authors: Mazyar Yosofi, Olivier Kerbrat, Pascal Mognol

Abstract:

Life cycle assessment of additive manufacturing processes has evolved significantly over the past few years. Many existing studies focused mainly on energy consumption. Nowadays, new methodologies for life cycle inventory acquisition have appeared in the literature and help manufacturers take into account all the input and output flows during the manufacturing step of the product life cycle. Indeed, the environmental analysis of the phenomena that occur during the manufacturing step of additive manufacturing processes is becoming well understood, and it is now possible to count and measure accurately all the inventory data during this step. Optimization of the environmental performance of processes can therefore be considered. Environmental performance can be improved by varying process parameters. However, many of these parameters (such as manufacturing speed, the power of the energy source, or the quantity of support material) directly affect the mechanical properties, surface finish and dimensional accuracy of a functional part. This study aims to improve the environmental performance of an additive manufacturing process without deteriorating part quality. For that purpose, the authors have developed a generic method that has been applied to multiple parts made by additive manufacturing processes. First, a complete analysis of the process parameters is made in order to identify which parameters affect only the environmental performance of the process. Then, multiple parts are manufactured while varying the identified parameters. The aim of this second step is to find the optimum values of the parameters that significantly decrease the environmental impact of the process while keeping the part quality as desired. Finally, parts made with the initial parameters are compared with parts made with the changed parameters. The major finding claimed by the authors is a reduction of the environmental impact of an additive manufacturing process while respecting three part quality criteria: mechanical properties, dimensional accuracy and surface roughness. Now that additive manufacturing processes can be considered mature from a technical point of view, their environmental improvement can be addressed while respecting part properties. The first part of this study presents the methodology applied to multiple academic parts; the validity of the methodology is then demonstrated on functional parts.
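The parameter selection step can be pictured as a constrained search: among parameter combinations whose parts still meet the quality criteria, choose the one with the lowest environmental impact. The sketch below illustrates this with hypothetical impact and quality models and thresholds; it is not the authors' method.

```python
# Illustrative sketch only: choose process parameters that minimise an
# environmental impact score subject to part quality constraints.
# The impact/quality models, values and thresholds are hypothetical placeholders.
import itertools

speeds = [30, 45, 60, 75]          # mm/s, candidate manufacturing speeds
powers = [150, 175, 200]           # W, candidate energy source powers

def environmental_impact(speed, power):
    # stand-in for a measured / inventory-based impact score (lower is better)
    return power / speed

def part_quality(speed, power):
    # stand-in for measured quality criteria of the produced part
    return {"tensile_mpa": 40 - 0.05 * speed,
            "dim_error_mm": 0.05 + 0.001 * speed,
            "roughness_um": 12 + 0.02 * power}

LIMITS = {"tensile_mpa": 36.0, "dim_error_mm": 0.12, "roughness_um": 16.0}

def acceptable(q):
    return (q["tensile_mpa"] >= LIMITS["tensile_mpa"]
            and q["dim_error_mm"] <= LIMITS["dim_error_mm"]
            and q["roughness_um"] <= LIMITS["roughness_um"])

# Keep only parameter sets whose parts meet all quality criteria,
# then pick the one with the lowest environmental impact
candidates = [(s, p) for s, p in itertools.product(speeds, powers)
              if acceptable(part_quality(s, p))]
best = min(candidates, key=lambda sp: environmental_impact(*sp))
print("selected speed/power:", best,
      "impact:", round(environmental_impact(*best), 2))
```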

Keywords: additive manufacturing, environmental impact, environmental improvement, mechanical properties

Procedia PDF Downloads 288
407 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data

Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca

Abstract:

In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, their uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty and thereby increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are developed for the data quality filtering of opportunistic species occurrence data used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact of filtering on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. The findings show that model performance is affected by (i) the quality of the filtered data, (ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and (iii) a species 'quality profile', resulting from a species classification based on the four traits related to data quality. These findings resulted in recommendations on when and how to filter volunteer-generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
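The evaluation logic (filter records, match sample sizes, compare discrimination metrics) can be sketched roughly as below; a generic presence/background classifier stands in for Maxent, and the column names, filter rule and synthetic data are assumptions rather than the study's workflow.

```python
# Illustrative sketch only: quantify how a data quality filter changes model
# performance while controlling for sample size, using a generic
# presence/background classifier as a stand-in for Maxent.
# Column names, the filter rule and the synthetic data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def fit_and_score(records, env_cols):
    X = records[env_cols].to_numpy()
    y = records["presence"].to_numpy()
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# records: opportunistic occurrences + background points with environmental covariates
rng = np.random.default_rng(3)
records = pd.DataFrame({
    "temp": rng.normal(size=4000),
    "landuse": rng.normal(size=4000),
    "validated": rng.random(4000) > 0.4,     # post-entry validation flag
    "presence": rng.integers(0, 2, size=4000),
})
env_cols = ["temp", "landuse"]

filtered = records[records["validated"]]
# Control for sample size: subsample the unfiltered data to the filtered size
unfiltered_matched = records.sample(n=len(filtered), random_state=0)

print("AUC filtered     :", round(fit_and_score(filtered, env_cols), 3))
print("AUC size-matched :", round(fit_and_score(unfiltered_matched, env_cols), 3))
```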

Keywords: citizen science, data quality filtering, species distribution models, trait profiles

Procedia PDF Downloads 202