Search results for: microscopic techniques
5746 Task Validity in Neuroimaging Studies: Perspectives from Applied Linguistics
Authors: L. Freeborn
Abstract:
Recent years have seen an increasing number of neuroimaging studies related to language learning as imaging techniques such as fMRI and EEG have become more widely accessible to researchers. By using a variety of structural and functional neuroimaging techniques, these studies have already made considerable progress in terms of our understanding of neural networks and processing related to first and second language acquisition. However, the methodological designs employed in neuroimaging studies to test language learning have been questioned by applied linguists working within the field of second language acquisition (SLA). One of the major criticisms is that tasks designed to measure language learning gains rarely have a communicative function, and seldom assess learners’ ability to use the language in authentic situations. This brings the validity of many neuroimaging tasks into question. The fundamental reason why people learn a language is to communicate, and it is well-known that both first and second language proficiency are developed through meaningful social interaction. With this in mind, the SLA field is in agreement that second language acquisition and proficiency should be measured through learners’ ability to communicate in authentic real-life situations. Whilst authenticity is not always possible to achieve in a classroom environment, the importance of task authenticity should be reflected in the design of language assessments, teaching materials, and curricula. Tasks that bear little relation to how language is used in real-life situations can be considered to lack construct validity. This paper first describes the typical tasks used in neuroimaging studies to measure language gains and proficiency, then analyses to what extent these tasks can validly assess these constructs.Keywords: neuroimaging studies, research design, second language acquisition, task validity
Procedia PDF Downloads 137
5745 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as confidence scoring, tuning of false positives and negatives, and automated feedback. An initial approach using natural language processing techniques to extract features achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology on two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite enabled prediction of specific vulnerabilities such as OS-command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS-command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques, as well as CNN models with ensemble modelling techniques, did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
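The path-context extraction underlying Code2Vec can be sketched in a few lines. The study works on Java and C++ codebases; purely as an illustration, the sketch below uses Python's own `ast` module to enumerate Code2Vec-style (leaf, path, leaf) triples, where the path runs up from one leaf to the common ancestor and down to the other:

```python
import ast
from itertools import combinations

def leaf_paths(tree):
    """Collect root-to-leaf chains of AST node-type names."""
    paths = []
    def walk(node, trail):
        trail = trail + [type(node).__name__]
        children = list(ast.iter_child_nodes(node))
        if not children:
            paths.append(trail)
        for child in children:
            walk(child, trail)
    walk(tree, [])
    return paths

def path_contexts(source):
    """Pair up leaves via their AST paths, Code2Vec-style:
    (leaf_a, up-path + down-path, leaf_b)."""
    leaves = leaf_paths(ast.parse(source))
    contexts = []
    for a, b in combinations(leaves, 2):
        i = 0                                    # length of shared ancestor prefix
        while i < min(len(a), len(b)) and a[i] == b[i]:
            i += 1
        up = list(reversed(a[i - 1:]))           # leaf_a up to the common ancestor
        down = b[i:]                             # common ancestor down to leaf_b
        contexts.append((a[-1], tuple(up + down), b[-1]))
    return contexts

contexts = path_contexts("def f(x):\n    return x + 1")
print(len(contexts))
```

A model such as Code2Vec then embeds each triple and aggregates them with attention into a single code vector; the extraction step shown here is the part that is independent of the learned model.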
Procedia PDF Downloads 105
5744 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases
Authors: Mohammad A. Bani-Khaled
Abstract:
In this work, we use the Discrete Proper Orthogonal Decomposition (POD) transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical data obtained from finite element simulations. The outcomes of this analysis will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, from large-scale structures (aeronautical structures) to nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to numerically simulate the dynamics. In using numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time-Proper Orthogonal Decomposition transform is a powerful tool for processing such databases and will be used to study the coupled dynamics of thin-walled basic structures. These structures are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry.
Keywords: coupled dynamics, geometric complexity, proper orthogonal decomposition (POD), thin-walled beams
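In practice, the POD step described here amounts to a singular value decomposition of a snapshot matrix assembled from the simulation database. A minimal sketch on a synthetic two-mode displacement field (the data are invented for illustration, not taken from the study):

```python
import numpy as np

# Snapshot matrix: each column is the displacement field at one time step.
# Synthetic two-mode field sampled on 64 spatial points over 200 snapshots.
x = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 10.0, 200)
U = (np.outer(np.sin(np.pi * x), np.cos(2.0 * t))
     + 0.1 * np.outer(np.sin(2.0 * np.pi * x), np.cos(5.0 * t)))

# POD: subtract the temporal mean, then take the SVD of the snapshots.
U0 = U - U.mean(axis=1, keepdims=True)
phi, s, _ = np.linalg.svd(U0, full_matrices=False)   # phi: spatial POD modes

# Energy captured by each mode = sigma_i^2 / sum(sigma^2); the singular value
# spectrum exposes how many modes (fields) dominate the coupled response.
energy = s**2 / np.sum(s**2)
print("modes for 99% energy:", int(np.searchsorted(np.cumsum(energy), 0.99)) + 1)
```

The rapid decay of the energy spectrum is exactly what makes POD useful for quantifying the strength of coupling: strongly coupled responses distribute energy across more modes.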
Procedia PDF Downloads 418
5743 Effect of Post Circuit Resistance Exercise Glucose Feeding on Energy and Hormonal Indexes in Plasma and Lymphocyte in Free-Style Wrestlers
Authors: Miesam Golzadeh Gangraj, Younes Parvasi, Mohammad Ghasemi, Ahmad Abdi, Saeid Fazelifar
Abstract:
The purpose of the study was to determine the effect of glucose feeding on energy and hormonal indexes in plasma and lymphocytes immediately after wrestling-base techniques circuit exercise (WBTCE) in young male freestyle wrestlers. Sixteen wrestlers (weight = 75.45 ± 12.92 kg, age = 22.29 ± 0.90 years, BMI = 26.23 ± 2.64 kg/m²) were randomly divided into two groups: control (water) and glucose (2 g per kg body weight). Blood samples were obtained before exercise, immediately after exercise, and at 90 minutes of the post-exercise recovery period. Glucose (2 g/kg of body weight, 1W/5V) and water (equal volumes) solutions were given immediately after the second blood sampling. Data were analyzed using a repeated-measures ANOVA and a suitable post hoc test (LSD). A significant decrease was observed in lymphocyte glycogen immediately after exercise (P < 0.001). In the experimental group, lymphocyte glycogen concentration increased (P < 0.028) relative to the control group at 90 min post-exercise. Plasma glucose concentrations increased in all groups immediately after exercise (P < 0.05). Plasma insulin concentrations in both groups decreased immediately after exercise, but at 90 min after exercise, insulin was significantly increased only in the glucose group (P < 0.001). Our results suggest that the WBTCE protocol can affect cellular energy sources and hormonal responses. Furthermore, glucose consumption can increase lymphocyte glycogen and improve energy availability within the cell.
Keywords: glucose feeding, lymphocyte, wrestling-base techniques circuit exercise
Procedia PDF Downloads 269
5742 Modeling and Simulation of Ship Structures Using Finite Element Method
Authors: Javid Iqbal, Zhu Shifan
Abstract:
The development in the construction of unconventional ships and the implementation of lightweight materials have given a large impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents the modeling and analysis techniques of ship structures using the FE method for complex boundary conditions that are difficult to analyze by existing Ship Classification Society rules. During operation, all ships experience complex loading conditions. These loads fall into the general categories of thermal, linear static, dynamic, and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in obtaining the dynamic stability of the ship. The FE method has yielded better techniques for calculating the natural frequencies and the different mode shapes of a ship structure so as to avoid resonance both globally and locally. Over the past few years, the ship industry has moved considerably toward ideal designs by solving complex engineering problems with the data stored in FE models. This paper provides an overview of ship modeling methodology for FE analysis and its general applications. Historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
Keywords: dynamic analysis, finite element methods, ship structure, vibration analysis
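As a concrete illustration of the modal-analysis use of FE mentioned above, the following sketch assembles consistent stiffness and mass matrices for a fixed-free bar from 2-node rod elements and recovers the first axial natural frequency. The material values are generic steel and the 1-D rod is a deliberately simple stand-in for a ship structural member, not the paper's model:

```python
import numpy as np

# Modal analysis of a fixed-free steel bar with 2-node rod elements (SI units).
E, rho, A, L, n = 210e9, 7850.0, 1e-4, 1.0, 50      # 50 elements
le = L / n
k = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])      # element stiffness
m = (rho * A * le / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])  # consistent mass

K = np.zeros((n + 1, n + 1))
M = np.zeros((n + 1, n + 1))
for e in range(n):                                  # assemble global matrices
    K[e:e + 2, e:e + 2] += k
    M[e:e + 2, e:e + 2] += m

# Fix the left end (drop row/column 0), then solve K x = w^2 M x.
Kf, Mf = K[1:, 1:], M[1:, 1:]
w2 = np.sort(np.linalg.eigvals(np.linalg.solve(Mf, Kf)).real)
f1 = np.sqrt(w2[0]) / (2.0 * np.pi)

f1_exact = np.sqrt(E / rho) / (4.0 * L)             # analytic first axial frequency
print(f"FE f1 = {f1:.1f} Hz, exact = {f1_exact:.1f} Hz")
```

Comparing the discrete eigenvalues against the analytic solution is the standard sanity check before trusting the mode shapes used to steer the design away from resonance.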
Procedia PDF Downloads 133
5741 Modelling Conceptual Quantities Using Support Vector Machines
Authors: Ka C. Lam, Oluwafunmibi S. Idowu
Abstract:
Uncertainty in cost is a major factor affecting performance of construction projects. To our knowledge, several conceptual cost models have been developed with varying degrees of accuracy. Incorporating conceptual quantities into conceptual cost models could improve the accuracy of early predesign cost estimates. Hence, the development of quantity models for estimating conceptual quantities of framed reinforced concrete structures using supervised machine learning is the aim of the current research. Using measured quantities of structural elements and design variables such as live loads and soil bearing pressures, response and predictor variables were defined and used for constructing conceptual quantities models. Twenty-four models were developed for comparison using a combination of non-parametric support vector regression, linear regression, and bootstrap resampling techniques. R programming language was used for data analysis and model implementation. Gross soil bearing pressure and gross floor loading were discovered to have a major influence on the quantities of concrete and reinforcement used for foundations. Building footprint and gross floor loading had a similar influence on beams and slabs. Future research could explore the modelling of other conceptual quantities for walls, finishes, and services using machine learning techniques. Estimation of conceptual quantities would assist construction planners in early resource planning and enable detailed performance evaluation of early cost predictions.Keywords: bootstrapping, conceptual quantities, modelling, reinforced concrete, support vector regression
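The bootstrap component of the methodology can be illustrated compactly. The study used R with support vector regression; the sketch below instead pairs ordinary least squares with percentile-bootstrap resampling, and the loading/quantity numbers are invented purely to show the resampling mechanics:

```python
import random
import statistics

# Hypothetical data: gross floor loading (kN/m^2) vs concrete quantity (m^3).
loading = [3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5]
concrete = [41.0, 46.5, 50.0, 57.0, 60.5, 66.0, 71.5, 75.0, 80.0, 86.5]

def fit_slope(xs, ys):
    """Ordinary least-squares slope for a single predictor."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(0)
slopes = []
for _ in range(2000):                  # bootstrap: resample pairs with replacement
    idx = [random.randrange(len(loading)) for _ in range(len(loading))]
    slopes.append(fit_slope([loading[i] for i in idx],
                            [concrete[i] for i in idx]))

slopes.sort()
lo, hi = slopes[int(0.025 * len(slopes))], slopes[int(0.975 * len(slopes))]
print(f"slope = {fit_slope(loading, concrete):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The same resampling loop wraps around any fitted model (including SVR), which is how bootstrap intervals quantify the uncertainty of early-stage quantity estimates.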
Procedia PDF Downloads 204
5740 Sustainable Production of Tin Oxide Nanoparticles: Exploring Synthesis Techniques, Formation Mechanisms, and Versatile Applications
Authors: Yemane Tadesse Gebreslassie, Henok Gidey Gebretnsae
Abstract:
Nanotechnology has emerged as a highly promising field of research with wide-ranging applications across various scientific disciplines. In recent years, tin oxide has garnered significant attention due to its intriguing properties, particularly when synthesized in the nanoscale range. While numerous physical and chemical methods exist for producing tin oxide nanoparticles, these approaches tend to be costly, energy-intensive, and involve the use of toxic chemicals. Given the growing concerns regarding human health and environmental impact, there has been a shift towards developing cost-effective and environmentally friendly processes for tin oxide nanoparticle synthesis. Green synthesis methods utilizing biological entities such as plant extracts, bacteria, and natural biomolecules have shown promise in successfully producing tin oxide nanoparticles. However, scaling up the production to an industrial level using green synthesis approaches remains challenging due to the complexity of biological substrates, which hinders the elucidation of reaction mechanisms and formation processes. Thus, this review aims to provide an overview of the various sources of biological entities and methodologies employed in the green synthesis of tin oxide nanoparticles, as well as their impact on nanoparticle properties. Furthermore, this research delves into the strides made in comprehending the mechanisms behind the formation of nanoparticles as documented in existing literature. It also sheds light on the array of analytical techniques employed to investigate and elucidate the characteristics of these minuscule particles.Keywords: nanotechnology, tin oxide, green synthesis, formation mechanisms
Procedia PDF Downloads 51
5739 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation
Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar
Abstract:
The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has made it possible to obtain more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is a convenient tool for resolving this type of problem. Erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multi-phase Euler-Lagrange model was adopted to predict solid-particle erosion wear in a 22.5° pipe bend for the flow of a bottom-ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and the discrete phase model, and evaluates the erosion wear rate for velocities varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particulates, which leads to high erosion on the bottom side of the pipeline.
Keywords: computational fluid dynamics (CFD), erosion, slurry transportation, k-ε model
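The strong velocity dependence reported here is usually captured in erosion modelling by a power-law correlation of the form E = k·Vⁿ, with n typically between 2 and 3. A sketch with placeholder constants (k and n below are illustrative, not fitted to this study's bend geometry or slurry):

```python
def erosion_rate(v, k=2.0e-9, n=2.6):
    """Generic power-law erosion correlation, E = k * v**n, giving mass of
    wall material removed per mass of impacting solids. k and n are
    placeholder values; in practice they are fitted to the wall material,
    particle properties, and impact angle."""
    return k * v ** n

# Relative erosion over the study's 2-4 m/s velocity range:
for v in (2.0, 3.0, 4.0):
    ratio = erosion_rate(v) / erosion_rate(2.0)
    print(f"v = {v} m/s  ->  relative erosion x{ratio:.2f}")
```

With n around 2.6, doubling the velocity multiplies the erosion rate roughly sixfold, which is why velocity dominates concentration, density, and particle size in the results above.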
Procedia PDF Downloads 405
5738 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique
Authors: Satyasen Panda, Urmila Bhanja
Abstract:
In this paper, we present and analyze three-dimensional (3-D) wavelength/time/space code matrices for optical code division multiple access (OCDMA) networks with the NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurement of the signal-to-noise ratio (SNR), BER, and eye diagram for different numbers of simultaneous users. The analysis also considers various types of noise and multiple access interference (MAI) effects. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a larger number of simultaneous users over longer fiber distances with minimum BER compared to the OR and AND subtraction techniques. The received optical power is also measured at various levels of BER to analyze the effect of attenuation.
Keywords: cross correlation (CC), three-dimensional optical code division multiple access (3-D OCDMA), spectral amplitude coding optical code division multiple access (SAC-OCDMA), multiple access interference (MAI), phase induced intensity noise (PIIN), three-dimensional modified quadratic congruence/modified prime (3-D MQC/MP) code
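In SAC-OCDMA analyses of this kind, the BER is commonly obtained from the SNR through a Gaussian approximation, BER = ½·erfc(√(SNR/8)). The sketch below applies that relation to a hypothetical SNR roll-off with user count; the numbers are illustrative, not the paper's measurements:

```python
import math

def ber_from_snr(snr):
    """Gaussian-approximation BER used in SAC-OCDMA performance analyses:
    BER = (1/2) * erfc(sqrt(SNR / 8))."""
    return 0.5 * math.erfc(math.sqrt(snr / 8.0))

for users in (10, 20, 30):
    # Hypothetical SNR degradation with simultaneous users (illustration only;
    # the real SNR expression depends on the code's cross-correlation and PIIN).
    snr = 2000.0 / users
    print(f"{users} users: SNR = {snr:.0f}, BER = {ber_from_snr(snr):.2e}")
```

Because MAI and PIIN grow with the number of active users, the SNR entering this formula falls and the BER rises, which is exactly the user-capacity trade-off the NAND, OR, and AND detection techniques are compared on.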
Procedia PDF Downloads 412
5737 Improvement of Sleep Quality Through Manual and Non-Pharmacological Treatment
Authors: Andreas Aceranti, Sergio Romanò, Simonetta Vernocchi, Silvia Arnaboldi, Emilio Mazza
Abstract:
As a result of the SARS-CoV-2 pandemic, the incidence of mood (thymic) disorders has significantly increased, and patients are often reluctant to take drugs aimed at stabilizing mood. In order to provide an alternative to drug therapies, we set up a study to evaluate the possibility of improving the quality of life of these subjects through osteopathic treatment. Patients were divided into visceral and fascial manual treatment groups, with the aim of increasing serotonin levels and stimulating the vagus nerve through validated techniques. The results were evaluated through the administration of targeted questionnaires assessing quality of life, mood, sleep, and intestinal function. At a first endpoint we found, in patients undergoing fascial treatment, an increase in quality of life and sleep: they report a decrease in the number of nocturnal awakenings, a reduction in the time needed to fall asleep, and greater rest upon waking. In contrast, patients undergoing visceral treatment, as well as those in the control group, did not show significant improvements. Patients in the fascial group reported an improvement in mood and subjective quality of life, with a generalized improvement in function. Although the study is still ongoing, based on the results of the first endpoint we can hypothesize that fascial stimulation of the vagus nerve with manual and osteopathic techniques may be a valid alternative to pharmacological treatments for mood and sleep disorders.
Keywords: osteopathy, insomnia, nocturnal awakening, thymism
Procedia PDF Downloads 88
5736 Considerations upon Structural Health Monitoring of Small to Medium Wind Turbines
Authors: Nicolae Constantin, Ştefan Sorohan
Abstract:
The small and medium wind turbines are running in quite different conditions as compared to the big ones. Consequently, they need also a different approach concerning the structural health monitoring (SHM) issues. There are four main differences between the above mentioned categories: (i) significantly smaller dimensions, (ii) considerably higher rotation speed, (iii) generally small distance between the turbine and the energy consumer and (iv) monitoring assumed in many situations by the owner. In such conditions, nondestructive inspections (NDI) have to be made as much as possible with affordable, yet effective techniques, requiring portable and accessible equipment. Additionally, the turbines and accessories should be easy to mount, dispose and repair. As the materials used for such unit can be metals, composites and combined, the technologies should be adapted accordingly. An example in which the two materials co-exist is the situation in which the damaged metallic skin of a blade is repaired with a composite patch. The paper presents the inspection of the bonding state of the patch, using portable ultrasonic equipment, able to put in place the Lamb wave method, which proves efficient in global and local inspections as well. The equipment is relatively easy to handle and can be borrowed from specialized laboratories or used by a community of small wind turbine users, upon the case. This evaluation is the first in a row, aimed to evaluate efficiency of NDI performed with rather accessible, less sophisticated equipment and related inspection techniques, having field inspection capabilities. The main goal is to extend such inspection procedures to other components of the wind power unit, such as the support tower, water storage tanks, etc.Keywords: structural health monitoring, small wind turbines, non-destructive inspection, field inspection capabilities
Procedia PDF Downloads 337
5735 A Literature Review on Emotion Recognition Using Wireless Body Area Network
Authors: Christodoulou Christos, Politis Anastasios
Abstract:
The utilization of Wireless Body Area Network (WBAN) is experiencing a notable surge in popularity as a result of its widespread implementation in the field of smart health. WBANs utilize small sensors implanted within the human body to monitor and record physiological indicators. These sensors transmit the collected data to hospitals and healthcare facilities through designated access points. Bio-sensors exhibit a diverse array of shapes and sizes, and their deployment can be tailored to the condition of the individual. Multiple sensors may be strategically placed within, on, or around the human body to effectively observe, record, and transmit essential physiological indicators. These measurements serve as a basis for subsequent analysis, evaluation, and therapeutic interventions. In conjunction with physical health concerns, numerous smartwatches are engineered to employ artificial intelligence techniques for the purpose of detecting mental health conditions such as depression and anxiety. The utilization of smartwatches serves as a secure and cost-effective solution for monitoring mental health. Physiological signals are widely regarded as a highly dependable method for the recognition of emotions due to the inherent inability of individuals to deliberately influence them over extended periods of time. The techniques that WBANs employ to recognize emotions are thoroughly examined in this article.Keywords: emotion recognition, wireless body area network, WBAN, ERC, wearable devices, psychological signals, emotion, smart-watch, prediction
Procedia PDF Downloads 49
5734 Testing of Protective Coatings on Automotive Steel, a Correlation Between Salt Spray, Electrochemical Impedance Spectroscopy, and Linear Polarization Resistance Test
Authors: Dhanashree Aole, V. Hariharan, Swati Surushe
Abstract:
Corrosion can cause serious and expensive damage to automobile components. Various proven techniques for controlling and preventing corrosion depend on the specific material to be protected. Electrochemical impedance spectroscopy (EIS) and salt spray tests are commonly used to assess the corrosion degradation mechanism of coatings on metallic surfaces, while the only test that monitors the corrosion rate in real time is linear polarization resistance (LPR). In this study, electrochemical tests (EIS and LPR) and the salt spray test are reviewed to assess the corrosion resistance and durability of different coatings. The main objective of this study is to correlate the test results obtained using LPR and EIS with the results obtained using the standard salt spray test. Another objective is to evaluate the performance and corrosion resistance of various coating systems (CED, epoxy, powder coating, autophoretic, and Zn-trivalent coating) for vehicle underbody applications. From this study, a promising correlation between the different corrosion testing techniques is noted. The most profound observation is that electrochemical tests give a quick estimate of corrosion resistance and can detect the degradation of coatings well before visible signs of damage appear. Furthermore, the corrosion resistance and salt spray life of the coatings investigated were found to decrease in the following order: CED > powder coating > autophoretic > epoxy coating > Zn-trivalent plating.
Keywords: linear polarization resistance (LPR), electrochemical impedance spectroscopy (EIS), salt spray test, sacrificial and barrier coatings
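The LPR technique mentioned above converts a measured polarization resistance into a corrosion rate through the Stern-Geary relation. A sketch with typical Tafel slopes and steel properties (the parameter values are generic illustrations, not the study's measurements):

```python
def corrosion_rate_mmpy(rp_ohm_cm2, beta_a=0.12, beta_c=0.12,
                        ew=27.92, density=7.87):
    """Stern-Geary: i_corr = B / Rp, with B = ba*bc / (2.303*(ba + bc)).
    The corrosion current density (A/cm^2) is then converted to a
    penetration rate (mm/year) via Faraday's law. Tafel slopes (V/decade),
    equivalent weight (g) and density (g/cm^3) are typical steel values."""
    b = (beta_a * beta_c) / (2.303 * (beta_a + beta_c))
    i_corr = b / rp_ohm_cm2                  # A/cm^2
    return 3.27e3 * i_corr * ew / density    # mm/year

# A coating that keeps Rp two orders of magnitude higher slows corrosion
# by the same factor -- the real-time sensitivity that LPR provides.
print(f"{corrosion_rate_mmpy(1.0e5):.4f} mm/y vs {corrosion_rate_mmpy(1.0e3):.4f} mm/y")
```

This inverse proportionality between Rp and corrosion rate is what lets LPR flag coating degradation numerically before the salt spray panels show any visible damage.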
Procedia PDF Downloads 525
5733 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry
Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich
Abstract:
The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. To assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, relying only on the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. These interferograms were obtained by means of a Mach-Zehnder interferometer using a beam of light from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analyzed, and its full width at half maximum (FWHM) was measured using Amcap and ImageJ software. The FWHM values were organized into three groups. It was observed that the average of the FWHMs of group A shows almost linear behavior; therefore, it is probable that the exposure time is not relevant when the oil is kept at constant temperature. Group B exhibits a slight exponential trend when the temperature rises between 373 K and 393 K. Results of Student's t-test show, with 95% probability (0.05), the existence of variation in the molecular composition of both samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
Keywords: food industry, interferometry, oils, quality control
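The FWHM measurement applied to each interferogram can be reproduced with simple linear interpolation at the half-maximum crossings. The Gaussian profile below is synthetic, used only to check the routine against the analytic width, not data from the oil samples:

```python
import math

def fwhm(xs, ys):
    """Full width at half maximum of a sampled intensity profile, with
    linear interpolation at the two half-maximum crossings."""
    half = max(ys) / 2.0
    crossings = []
    for i in range(len(ys) - 1):
        y0, y1 = ys[i], ys[i + 1]
        if (y0 - half) * (y1 - half) < 0:        # sign change -> crossing
            frac = (half - y0) / (y1 - y0)
            crossings.append(xs[i] + frac * (xs[i + 1] - xs[i]))
    return crossings[-1] - crossings[0]

# Gaussian test profile: analytic FWHM = 2*sqrt(2*ln 2)*sigma ~ 2.3548*sigma.
sigma = 1.5
xs = [i * 0.01 for i in range(-1000, 1001)]
ys = [math.exp(-0.5 * (x / sigma) ** 2) for x in xs]
print(f"measured FWHM = {fwhm(xs, ys):.3f}, analytic = {2.3548 * sigma:.3f}")
```

Tools such as ImageJ apply essentially this crossing-interpolation logic to an intensity profile sampled across the fringe pattern.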
Procedia PDF Downloads 370
5732 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies, and predictive model types or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where protein value has been determined by the FOSS Infratec NOVA which is the golden industry standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. In the first dataset, protein regression analysis is the problem to solve while variety classification analysis is the problem to solve in the second dataset. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data reducing the need for advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested. 
These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
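Of the preprocessing techniques listed, standard normal variate (SNV) is particularly compact: each spectrum is centered and scaled by its own statistics, removing sample-to-sample scatter offsets and gains. A minimal sketch with invented reflectance values:

```python
import statistics

def snv(spectrum):
    """Standard normal variate: center each spectrum on its own mean and
    scale by its own standard deviation, removing additive offsets and
    multiplicative scatter effects between samples."""
    mu = statistics.fmean(spectrum)
    sd = statistics.stdev(spectrum)
    return [(v - mu) / sd for v in spectrum]

# Two copies of the same 'spectrum', one distorted by a scatter offset and gain:
a = [0.10, 0.40, 0.90, 0.60, 0.20]
b = [0.2 + 1.5 * v for v in a]            # offset + multiplicative distortion
print([round(x, 3) for x in snv(a)] == [round(x, 3) for x in snv(b)])
```

After SNV, the two distorted copies collapse onto the same curve, which is why it helps classical chemometric models such as PLS-R; part of the appeal of CNNs noted above is that they can learn to compensate for such effects without this explicit step.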
Procedia PDF Downloads 98
5731 Design and Development of Bioactive α-Hydroxy Carboxylate Group Modified MnFe₂O₄ Nanoparticle: Comparative Fluorescence Study, Magnetism and DNA Nuclease Activity
Authors: Indranil Chakraborty, Kalyan Mandal
Abstract:
Three new α-hydroxy carboxylate group functionalized MnFe₂O₄ nanoparticles (NPs) have been developed to explore the microscopic origin of ligand modified fluorescence and magnetic properties of nearly monodispersed MnFe₂O₄ NPs. The surface functionalization has been carried out with three small organic ligands (tartrate, malate, and citrate) having different number of α-hydroxy carboxylate functional group along with steric effect. Detailed study unveils that α-hydroxy carboxylate moiety of the ligands plays key role to generate intrinsic fluorescence in functionalized MnFe₂O₄ NPs through the activation of ligand to metal charge transfer transitions, associated with ligand-Mn²⁺/Fe³⁺ interactions along with d-d transition corresponding to d-orbital energy level splitting of Fe³⁺ ions on NP surface. Further, MnFe₂O₄ NPs show a maximum 140.88% increase in coercivity and 97.95% decrease in magnetization compared to its bare one upon functionalization. The ligands that induce smallest crystal field splitting of d-orbital energy level of transition metal ions are found to result in strongest ferromagnetic activation of the NPs. Finally, our developed tartrate functionalized MnFe₂O₄ (T-MnFe₂O₄) NPs have been utilized for studying DNA binding interaction and nuclease activity for stimulating their beneficial activities toward diverse biomedical applications. The spectroscopic measurements indicate that T-MnFe₂O₄ NPs bind calf thymus DNA by intercalative mode. The ability of T-MnFe₂O₄ NPs to induce DNA cleavage was studied by gel electrophoresis technique where the complex is found to promote the cleavage of pBR322 plasmid DNA from the super coiled form I to linear coiled form II and nicked coiled form III with good efficiency. 
This may be taken into account for designing new biomolecular detection agents and anti-cancer drug which can open up a new door toward diverse non-invasive biomedical applications.Keywords: MnFe₂O₄ nanoparticle, α-hydroxy carboxylic acid, comparative fluorescence, magnetism study, DNA interaction, nuclease activity
Procedia PDF Downloads 136
5730 Integrated Free Space Optical Communication and Optical Sensor Network System with Artificial Intelligence Techniques
Authors: Yibeltal Chanie Manie, Zebider Asire Munyelet
Abstract:
5G and 6G technologies offer enhanced quality of service with high data transmission rates, which necessitates the implementation of the Internet of Things (IoT) in the 5G/6G architecture. In this paper, we propose the integration of free-space optical communication (FSO) with fiber sensor networks for IoT applications. FSO has recently been gaining popularity as an effective alternative to the limited availability of radio frequency (RF) spectrum, owing to its flexibility, high achievable optical bandwidth, and low power consumption in several communication applications, such as disaster recovery, last-mile connectivity, drones, surveillance, backhaul, and satellite communications. Hence, high-speed FSO is an optimal choice for wireless networks to realize the full potential of 5G/6G technology, offering speeds of 100 Gbit/s or more in IoT applications. Moreover, machine learning must be integrated into the design, planning, and optimization of future optical wireless communication networks in order to actualize this vision of intelligent processing and operation. In addition, fiber sensors are important for achieving real-time, accurate, and smart monitoring in IoT applications. We therefore also propose deep learning techniques to estimate the strain changes and peak wavelengths of multiple fiber Bragg grating (FBG) sensors using only the FBG spectra obtained from a real experiment.
Keywords: optical sensor, artificial intelligence, Internet of Things, free-space optics
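The FBG peak-wavelength estimation described above can be sketched in miniature. The snippet below is a toy illustration, not the authors' model: it generates synthetic Gaussian FBG reflection spectra (the wavelength grid, FWHM, and noise level are all assumed values) and fits a closed-form ridge regression as a simple stand-in for the proposed deep learning estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(1545.0, 1555.0, 200)          # wavelength grid in nm (assumed)

def fbg_spectrum(peak_nm, fwhm=0.4, noise=0.01):
    """Synthetic Gaussian reflection spectrum of a single FBG (toy model)."""
    sigma = fwhm / 2.355
    s = np.exp(-0.5 * ((wl - peak_nm) / sigma) ** 2)
    return s + noise * rng.standard_normal(wl.size)

# training set: spectra with known peak wavelengths
peaks = rng.uniform(1547.0, 1553.0, 500)
X = np.stack([fbg_spectrum(p) for p in peaks])
y = peaks - wl.mean()                           # regress on offset from band center

# closed-form ridge regression as a stand-in for the paper's deep model
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# held-out spectra: recover peak wavelengths from the raw spectrum alone
test_peaks = rng.uniform(1547.0, 1553.0, 100)
Xt = np.stack([fbg_spectrum(p) for p in test_peaks])
pred = Xt @ w + wl.mean()
rmse = float(np.sqrt(np.mean((pred - test_peaks) ** 2)))
print(f"peak-wavelength RMSE: {rmse:.4f} nm")
```

A deep network would replace the linear map when spectra overlap or distort nonlinearly, which is the multi-FBG case the abstract targets.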
Procedia PDF Downloads 63
5729 Association of Type 1 Diabetes and Celiac Disease in Adult Patients
Authors: Soumaya Mrabet, Taieb Ach, Imen Akkari, Amira Atig, Neirouz Ghannouchi, Koussay Ach, Elhem Ben Jazia
Abstract:
Introduction: Celiac disease (CD) and type 1 diabetes mellitus (T1D) are complex disorders with shared genetic components. The association between CD and T1D has been reported in many pediatric series. The aim of our study is to describe the epidemiological, clinical, and evolutive characteristics of adult patients presenting this association. Material and Methods: This is a retrospective study including patients diagnosed with both CD and T1D, explored in the Internal Medicine, Gastroenterology, and Endocrinology and Diabetology Departments of Farhat Hached University Hospital between January 2005 and June 2016. Results: Among 57 patients with CD, 15 also had T1D (26.3%). There were 11 women and 4 men, with a median age of 27 years (16-48). All patients developed T1D prior to the diagnosis of CD, with an average interval of 47 months between the two diagnoses (6 months to 5 years). CD was revealed by recurrent abdominal pain in 11 cases, diarrhea in 10 cases, bloating in 8 cases, constipation in 6 cases, and vomiting in 2 cases. Three patients presented menstrual cycle disorders, with secondary amenorrhea in 2 patients. Anti-endomysium, anti-transglutaminase, and anti-gliadin antibodies were positive in 57, 54, and 11 cases, respectively. Biological tests revealed anemia in 10 cases, secondary to iron deficiency in 6 cases and to folate and vitamin B12 deficiency in 4 cases, hypoalbuminemia in 4 cases, hypocalcemia in 3 cases, and hypocholesterolemia in 1 patient. Upper gastrointestinal endoscopy showed effacement of the folds of the duodenal mucosa in 6 cases and a congestive duodenal mucosa in 3 cases; the macroscopic appearance was normal in the other cases. Microscopic examination showed villous atrophy in 57 cases, partial in 10 cases and total in 47 cases. After an average follow-up of 3 years and 2 months, the evolution was favorable in all patients under a gluten-free diet, with lower insulin requirements in 10 patients.
Conclusion: In our study, the prevalence of T1D in adult patients with CD was 26.3%. This association can be attributed to overlapping genetic HLA risk loci. In recent studies, the role of gluten as an important player in the pathogenesis of both CD and T1D has also been suggested.
Keywords: celiac disease, gluten, prevalence, type 1 diabetes
Procedia PDF Downloads 252
5728 Polymer Mediated Interaction between Grafted Nanosheets
Authors: Supriya Gupta, Paresh Chokshi
Abstract:
Polymer-particle interactions can be effectively utilized to produce composites that possess physicochemical properties superior to those of the neat polymer. The incorporation of fillers with dimensions comparable to the polymer chain size produces composites with extraordinary properties owing to their very high surface-to-volume ratio. Dispersion of the nanoparticles is achieved by inducing steric repulsion, realized by grafting the particles with polymeric chains. A comprehensive understanding of the interparticle interaction between these functionalized nanoparticles plays an important role in the synthesis of a stable polymer nanocomposite. With a focus on the incorporation of clay sheets in a polymer matrix, we theoretically construct the polymer-mediated interparticle potential for two nanosheets grafted with polymeric chains. Self-consistent field theory (SCFT) is employed to obtain the inhomogeneous composition field under equilibrium. Unlike continuum models, SCFT is built from a microscopic description, taking into account the molecular interactions contributed by both intra- and inter-chain potentials. We present the results of SCFT calculations of the interaction potential curve for two grafted nanosheets immersed in a matrix of polymeric chains of chemistry dissimilar to that of the grafted chains. The interaction potential is repulsive at short separations and shows depletion attraction at moderate separations, induced by high grafting density. It is found that the strength of the attraction well can be tuned by altering the compatibility between the grafted and the mobile chains. Further, we construct the interaction potential between two nanosheets grafted with diblock copolymers, with one of the blocks being chemically identical to the free polymeric chains.
The interplay between the enthalpic interaction of the dissimilar species and the entropy of the free chains gives rise to rich behavior in the interaction potential curves, obtained for the two separate cases of the free chains being chemically similar to either the grafted block or the free block of the grafted diblock chains.
Keywords: clay nanosheets, polymer brush, polymer nanocomposites, self-consistent field theory
Procedia PDF Downloads 251
5727 Identification of High-Rise Buildings Using Object-Based Classification and Shadow Extraction Techniques
Authors: Subham Kharel, Sudha Ravindranath, A. Vidya, B. Chandrasekaran, K. Ganesha Raj, T. Shesadri
Abstract:
Digitization of urban features is a tedious and time-consuming process when done manually. In addition, Indian cities have complex habitat and clustering patterns, which make it even more difficult to map features. This paper attempts to classify urban objects in satellite imagery using object-oriented classification techniques, in which classes such as vegetation, water bodies, buildings, and shadows adjacent to the buildings were mapped semi-automatically. The building layer obtained from object-oriented classification was used along with already available building layers. The main focus, however, lay in the extraction of high-rise buildings using spatial technology, digital image processing, and modeling, which would otherwise be a very difficult task to carry out manually. Results indicated a considerable rise in the total number of buildings in the city. High-rise buildings were successfully mapped using satellite imagery and spatial technology, along with logical reasoning and mathematical considerations. The results clearly demonstrate the ability of remote sensing and GIS to solve complex problems in urban scenarios, such as studying urban sprawl and identifying more complex features in an urban area like high-rise buildings and multi-dwelling units. The object-oriented technique proved effective, yielding an overall accuracy of 80 percent in the classification of high-rise buildings.
Keywords: object-oriented classification, shadow extraction, high-rise buildings, satellite imagery, spatial technology
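The shadow-extraction idea above can be illustrated with a deliberately crude sketch. The paper's semi-automatic object-oriented workflow is far richer; the toy below only shows the core notion of masking shadow pixels by a brightness threshold (the threshold value and the synthetic scene are assumptions).

```python
import numpy as np

def shadow_mask(rgb, brightness_thresh=0.25):
    """Label a pixel as shadow when its mean brightness falls below a threshold.
    rgb: float array in [0, 1] with shape (H, W, 3); the threshold is an assumption."""
    return rgb.mean(axis=2) < brightness_thresh

# toy scene: bright ground (0.7) with one dark rectangular shadow patch (0.1)
img = np.full((20, 20, 3), 0.7)
img[5:12, 8:15] = 0.1
mask = shadow_mask(img)
print(mask.sum())   # 49 shadow pixels (7 x 7 patch)
```

In the actual application, the extent of such a mask adjacent to a building, combined with sun elevation, is what lets shadow length proxy for building height.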
Procedia PDF Downloads 154
5726 System Identification of Timber Masonry Walls Using Shaking Table Test
Authors: Timir Baran Roy, Luis Guerreiro, Ashutosh Bagchi
Abstract:
Dynamic studies are important for the design, repair, and rehabilitation of structures, and have played an important role in characterizing the behavior of structures such as bridges, dams, and high-rise buildings. There has been substantial development in this area over the last few decades, especially in the field of dynamic identification techniques for structural systems. Frequency Domain Decomposition (FDD) and Time Domain Decomposition are among the most commonly used methods to identify modal parameters such as natural frequency, modal damping, and mode shape. The focus of the present research is to study the dynamic characteristics of typical timber masonry walls commonly used in Portugal. For that purpose, a multi-storey structural prototype of such walls has been tested on a seismic shake table at the National Laboratory for Civil Engineering, Portugal (LNEC). Signal processing has been performed on the output response of the prototype, collected from the shaking table experiment using accelerometers. In the present work, signal processing of the output response, based on the input response, has been done in two ways: FDD and Stochastic Subspace Identification (SSI). To estimate the values of the modal parameters, algorithms for FDD are formulated, and parametric functions for SSI are computed. Finally, the estimated values from both methods are compared to measure the accuracy of the two techniques.
Keywords: frequency domain decomposition (FDD), modal parameters, signal processing, stochastic subspace identification (SSI), time domain decomposition
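The FDD procedure mentioned above can be shown compactly: estimate the cross-spectral density matrix of the measured accelerations, take an SVD at each frequency line, and read natural frequencies off the peaks of the first singular value. The sketch below runs on synthetic two-sensor data with an assumed 5 Hz mode; it is a minimal illustration of the method, not the LNEC processing chain.

```python
import numpy as np

fs, T = 100.0, 60.0                        # sampling rate (Hz) and duration (s), assumed
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(1)
f0 = 5.0                                   # "true" modal frequency of the toy system
mode = np.array([1.0, 0.6])                # assumed mode shape at the two sensors
y = np.outer(np.sin(2 * np.pi * f0 * t), mode)
y += 0.1 * rng.standard_normal(y.shape)    # measurement noise

# Welch-style averaged cross-spectral density matrix G(f), then SVD per frequency
nseg, nfft = 10, 512
freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
G = np.zeros((freqs.size, 2, 2), dtype=complex)
seg_len = y.shape[0] // nseg
win = np.hanning(nfft)[:, None]
for k in range(nseg):
    seg = y[k * seg_len:k * seg_len + nfft]
    Y = np.fft.rfft(seg * win, n=nfft, axis=0)
    G += np.einsum('fi,fj->fij', Y, Y.conj()) / nseg

# first singular value per frequency line; its peak marks the natural frequency
s1 = np.array([np.linalg.svd(G[i], compute_uv=False)[0] for i in range(freqs.size)])
f_est = freqs[np.argmax(s1)]
print(f"identified frequency: {f_est:.2f} Hz")
```

The first singular vector at the peak approximates the mode shape, which is how FDD recovers both quantities from output-only data.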
Procedia PDF Downloads 263
5725 Developing Oral Communication Competence in a Second Language: The Communicative Approach
Authors: Ikechi Gilbert
Abstract:
Oral communication is the transmission of ideas or messages through the speech process. Acquiring competence in this area, which by its volatile nature is prone to errors and inaccuracies, requires the adoption of a well-suited teaching methodology. Efficient oral communication facilitates the exchange of ideas and the easy accomplishment of day-to-day tasks through a demonstrated mastery of oral expression and the making of fine presentations to audiences or individuals, while recognizing the verbal signals and body language of others and interpreting them correctly. In Anglophone states such as Nigeria and Ghana, the French language, for instance, is studied as a foreign language, taught mainly to learners whose mother tongue is different from French. The same applies to Francophone states, where English is studied as a foreign language by people whose official language or mother tongue is different from English. The ideal approach is to teach these languages in these environments through a pedagogical approach that properly takes care of the oral perspective for effective understanding and application by the learners. In this article, we examine the communicative approach as a methodology for teaching oral communication in a foreign language. This method is a direct response to the communicative needs of the learner, involving the use of appropriate materials and teaching techniques that meet those needs. It is also a vivid improvement on the traditional grammatical and audio-visual approaches. Our contribution focuses on the pedagogical component of oral communication improvement, highlighting its merits and proposing diverse techniques, including aspects of information and communication technology, that can assist the second language learner to communicate better orally.
Keywords: communication, competence, methodology, pedagogical component
Procedia PDF Downloads 264
5724 Parameter Identification Analysis in the Design of Rockfill Dams
Authors: G. Shahzadi, A. Soulaimani
Abstract:
This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis has been utilized for numerical simulation. Polynomial and neural network-based response surfaces have been generated to analyze the relationship between soil parameters and displacements, and the performance of these surrogate models has been analyzed and compared by evaluating the root mean square error. A comparative study has been done based on objective functions and optimization techniques. Objective functions are categorized by considering measured data with and without instrument uncertainty and are defined by the least-squares method, which estimates the norm between the predicted displacements and the measured values. Hydro-Québec provided the data sets of measured values for the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, is used to obtain an optimum value. Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE) are compared for the minimization problem; all these techniques take time to converge to an optimum value, but PSO provided the best convergence and soil parameters. Overall, parameter identification analysis can be effectively used for rockfill dam applications and has the potential to become a valuable tool for geotechnical engineers in assessing dam performance and dam safety.
Keywords: rockfill dam, parameter identification, stochastic analysis, regression, Plaxis
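The inverse-analysis loop described above (minimize a least-squares objective between measured and predicted displacements with PSO) can be sketched end to end. The forward model below is a made-up two-parameter stand-in for the Plaxis FE simulation, and the "measured" data are synthetic; only the structure of the workflow follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

def displacements(params):
    """Toy forward model standing in for the Plaxis FE run: displacements at
    three gauges as a function of two soil parameters (stiffness E, friction phi)."""
    E, phi = params
    return np.array([1.0 / E + 0.02 * phi, 2.0 / E, 0.5 / E + 0.01 * phi])

measured = displacements(np.array([50.0, 30.0]))    # assumed "true" parameters

def objective(p):
    """Least-squares norm between predicted and measured displacements."""
    return float(np.sum((displacements(p) - measured) ** 2))

# minimal particle swarm: 20 particles searching a 2-D box of parameter bounds
lo, hi = np.array([10.0, 10.0]), np.array([100.0, 45.0])
x = rng.uniform(lo, hi, (20, 2))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
for _ in range(200):
    gbest = pbest[np.argmin(pbest_f)]
    v = (0.7 * v + 1.5 * rng.random((20, 2)) * (pbest - x)
               + 1.5 * rng.random((20, 2)) * (gbest - x))
    x = np.clip(x + v, lo, hi)
    f = np.array([objective(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
gbest = pbest[np.argmin(pbest_f)]
print("identified parameters:", gbest, "objective:", objective(gbest))
```

In practice each objective evaluation is an expensive FE run, which is why the study trains polynomial and neural-network response surfaces first and lets the optimizer query those instead.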
Procedia PDF Downloads 145
5723 Interpretation and Prediction of Geotechnical Soil Parameters Using Ensemble Machine Learning
Authors: Goudjil Kamel, Boukhatem Ghania, Jlailia Djihene
Abstract:
This paper delves into the development of a sophisticated desktop application designed to calculate soil bearing capacity and predict limit pressure. Drawing from an extensive review of existing methodologies, the study meticulously examines various approaches employed in soil bearing capacity calculations, elucidating their theoretical foundations and practical applications. Furthermore, the study explores the burgeoning intersection of artificial intelligence (AI) and geotechnical engineering, underscoring the transformative potential of AI-driven solutions in enhancing predictive accuracy and efficiency. Central to the research is the utilization of cutting-edge machine learning techniques, including Artificial Neural Networks (ANN), XGBoost, and Random Forest, for predictive modeling. Through comprehensive experimentation and rigorous analysis, the efficacy and performance of each method are evaluated, with XGBoost emerging as the preeminent algorithm, showcasing superior predictive capabilities compared to its counterparts. The study culminates in a nuanced understanding of the intricate dynamics at play in geotechnical analysis, offering valuable insights into optimizing soil bearing capacity calculations and limit pressure predictions. By harnessing the power of advanced computational techniques and AI-driven algorithms, the paper presents a paradigm shift in the realm of geotechnical engineering, promising enhanced precision and reliability in civil engineering projects.
Keywords: limit pressure of soil, XGBoost, random forest, bearing capacity
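The model-comparison workflow the paper describes (train several regressors on soil features, compare them on held-out data) can be sketched without the actual ANN/XGBoost/Random Forest stack. The snippet below uses synthetic stand-in data and two deliberately simple surrogates, a mean-value baseline and a least-squares linear model, purely to show the train/test RMSE comparison; it does not reproduce the paper's models or data.

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic stand-in data: "limit pressure" as a noisy function of two soil features
X = rng.uniform(0.0, 1.0, (200, 2))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)
Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]

def rmse(pred, obs):
    """Root mean square error, the comparison metric used in the study."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# surrogate 1: baseline that always predicts the training mean
rmse_mean = rmse(np.full_like(yte, ytr.mean()), yte)

# surrogate 2: least-squares linear model with an intercept column
A = np.column_stack([np.ones(len(Xtr)), Xtr])
coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
pred = np.column_stack([np.ones(len(Xte)), Xte]) @ coef
rmse_lin = rmse(pred, yte)
print(f"baseline RMSE: {rmse_mean:.3f}, linear RMSE: {rmse_lin:.3f}")
```

Ranking candidate models by held-out RMSE in exactly this way is how the study concludes that XGBoost outperforms its counterparts.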
Procedia PDF Downloads 20
5722 Amelioration of Lipopolysaccharide-Induced Murine Colitis by Cell Wall Contents of Probiotic Lactobacillus casei: Targeting Immuno-Inflammation and Oxidative Stress
Authors: Vishvas N. Patel, Mehul Chorawala
Abstract:
Currently, to the authors' best knowledge, there are few effective therapeutic agents to limit the intestinal mucosal damage associated with inflammatory bowel disease (IBD). Clinical studies have shown beneficial effects of several probiotics in patients with IBD. Probiotics are live organisms that confer a health benefit to the host by modulating immuno-inflammation and oxidative stress. Although probiotics improve disease severity in murine models and humans, very little is known about the specific contribution of the cell wall contents of probiotics in IBD. Herein, we investigated the ameliorative potential of the cell wall contents of Lactobacillus casei (LC) in lipopolysaccharide (LPS)-induced murine colitis. Methods: Colitis was induced in LPS-sensitized rats by intracolonic instillation of LPS (50 µg/rat) for 14 consecutive days. Concurrently, cell wall contents isolated from 10³, 10⁶, and 10⁹ CFU of LC were given subcutaneously to each rat for 21 days, with sulfasalazine (100 mg/kg, p.o.) as the standard. The severity of colitis was assessed by body weight loss, food intake, stool consistency, rectal bleeding, colon weight/length, spleen weight, and histological analysis. Colonic inflammatory markers (myeloperoxidase (MPO) activity, C-reactive protein, and proinflammatory cytokines) and oxidative stress markers (malondialdehyde, reduced glutathione, and nitric oxide) were also assayed. Results: Cell wall contents isolated from 10⁶ and 10⁹ CFU of LC significantly improved the severity of colitis by reducing body weight loss and the incidence of diarrhea and bleeding, and by improving food intake, colon weight/length, spleen weight, and microscopic damage to the colonic mucosa. The treatment also reduced the levels of inflammatory and oxidative stress markers and boosted antioxidant molecules. However, cell wall contents isolated from 10³ CFU were ineffective.
Conclusion: The cell wall contents of LC attenuate LPS-induced colitis by modulating immuno-inflammation and oxidative stress.
Keywords: probiotics, Lactobacillus casei, immuno-inflammation, oxidative stress, lipopolysaccharide, colitis
Procedia PDF Downloads 86
5721 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations
Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra
Abstract:
The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include maximum voluntary isometric contraction (MVIC), dynamic EMG peak (EMGPeak), and dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they could misrepresent the absolute magnitude of force generated by the muscle and thus affect the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS); the bioelectric methods used were normalization to the mean and peak of the signal during the walking task (EMGMean and EMGPeak). The effect of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability was compared between disparate cohorts of OLD (76.6 yrs, N=11) and YOUNG (26.6 yrs, N=11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb.
EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson's correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r = 0.86 to r = 1.00 (p < 0.05). This indicates that each method can characterize the muscle activation pattern during walking. Repeated-measures ANOVA showed a main effect of age in MG for EMGPeak, but no other main effects were observed. Age*phase interactions in EMG amplitude between YOUNG and OLD led to different statistical interpretations across methods: EMGTS normalization characterized the fewest differences (four phases across all 5 muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, representing inter-individual variability, was greatest for EMGTS and lowest for EMGMean, with EMGPeak slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization would retain inter-individual variability, which may be desirable; however, it also suggests that even when large differences are expected, a larger sample size may be required to observe them. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions.
Keywords: electromyography, EMG normalization, functional EMG, older adults
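The key mechanism in the findings above, that task-based normalization strips out inter-individual amplitude differences while an external reference preserves them, can be demonstrated on toy data. Everything below is synthetic (the envelope shape, amplitude spread, and reference levels are assumptions), and the "reference" divisor only loosely mimics an MVIC/EMGTS-style denominator.

```python
import numpy as np

rng = np.random.default_rng(4)
# toy rectified EMG envelopes: 5 subjects sharing one activation shape across
# 16 gait-cycle bins, differing only in overall amplitude (biological variation)
base = 0.2 + np.abs(np.sin(np.linspace(0.0, np.pi, 16)))
amps = rng.uniform(0.5, 2.0, 5)
emg = amps[:, None] * base                        # shape (subjects, bins)

emg_peak = emg / emg.max(axis=1, keepdims=True)   # EMGPeak-style normalization
emg_mean = emg / emg.mean(axis=1, keepdims=True)  # EMGMean-style normalization
ref = rng.uniform(2.0, 4.0, 5)                    # assumed external reference level
emg_ref = emg / ref[:, None]                      # reference-based (MVIC/EMGTS-like)

def cv(x):
    """Inter-individual coefficient of variation, averaged across gait bins."""
    return float(np.mean(x.std(axis=0) / x.mean(axis=0)))

print(cv(emg_peak), cv(emg_mean), cv(emg_ref))
```

Because peak and mean normalization divide each subject by their own signal, purely multiplicative between-subject differences collapse to zero CV, whereas dividing by an independent reference leaves that variability intact, mirroring the study's CV ordering.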
Procedia PDF Downloads 91
5720 Clinical Trial of VEUPLEXᵀᴹ TBI Assay to Help Diagnose Traumatic Brain Injury by Quantifying Glial Fibrillary Acidic Protein and Ubiquitin Carboxy-Terminal Hydrolase L1 in the Serum of Patients Suspected of Mild TBI by Fluorescence Immunoassay
Authors: Moon Jung Kim, Guil Rhim
Abstract:
The clinical sensitivity of the VEUPLEXᵀᴹ TBI assay, a clinical trial medical device, in mild traumatic brain injury was 28.6% (95% CI, 19.7%-37.5%), and the clinical specificity was 94.0% (95% CI, 89.3%-98.7%). In addition, when the marker-level results were combined, sensitivity was higher when the two tests, UCHL1 and GFAP, were interpreted together than when either test was used alone. When sensitivity and specificity were analyzed against CT results for the mild traumatic brain injury group, the clinical sensitivity for the 2 CT-positive cases was 50.0% (95% CI: 1.3%-98.7%), and the clinical specificity for the 19 CT-negative cases was 68.4% (95% CI: 43.5%-87.4%). Since the low clinical sensitivity for the two CT-positive cases was not statistically significant owing to the small number of samples analyzed, more samples should be secured and analyzed in the future. Regarding the clinical specificity results for the 19 CT-negative cases, a large number of patients were clinically diagnosed with mild traumatic brain injury yet received a CT-negative result, and about 31.6% of them showed abnormal results on the VEUPLEXᵀᴹ TBI assay. Although traumatic brain injury was not detected on CT in these patients, the possibility of an actual mild brain injury could not be ruled out, and this could be confirmed through follow-up observation. In addition, among patients with mild traumatic brain injury, CT examinations were often not performed because symptoms were very mild; even so, more than about 25% of these patients showed abnormal results on the VEUPLEXᵀᴹ TBI assay. In fact, no damage is observed with the naked eye immediately after traumatic brain injury, and traumatic brain injury may not be observed on CT either.
In some cases, however, brain hemorrhage may occur after a certain period of time (delayed cerebral hemorrhage), so patients who showed abnormal results on the VEUPLEXᵀᴹ TBI assay should be followed up for delayed cerebral hemorrhage. In conclusion, it is difficult to diagnose mild traumatic brain injury with the VEUPLEXᵀᴹ TBI assay on the basis of clinical findings alone, that is, the GCS value, without CT results. At the same time, CT does not detect all mild traumatic brain injuries, so the absence of evidence of injury on CT does not necessarily mean that no traumatic brain injury has occurred. In the long term, more patients should be included to evaluate the usefulness of the VEUPLEXᵀᴹ TBI assay in detecting microscopic traumatic brain injuries without using CT.
Keywords: brain injury, traumatic brain injury, GFAP, UCHL1
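The sensitivity/specificity figures with 95% confidence intervals reported above follow the standard proportion-with-Wald-interval calculation, which is easy to reproduce. The counts below are illustrative assumptions chosen because they return the reported point estimates and intervals; the actual trial sample sizes are not stated in the abstract.

```python
import math

def sens_spec_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with normal-approximation (Wald) 95% CIs."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1.0 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)
    return {"sensitivity": prop_ci(tp, tp + fn),
            "specificity": prop_ci(tn, tn + fp)}

# illustrative counts: 28/98 true positives and 94/100 true negatives reproduce
# the abstract's 28.6% (19.7%-37.5%) sensitivity and 94.0% (89.3%-98.7%) specificity
res = sens_spec_ci(tp=28, fn=70, tn=94, fp=6)
print(res)
```

The very wide interval for the 2 CT-positive cases (1.3%-98.7%) falls out of the same arithmetic with n=2, which is exactly why the abstract declines to draw conclusions from that subgroup.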
Procedia PDF Downloads 98
5719 Critical Approach to Define the Architectural Structure of a Health Prototype in a Rural Area of Brazil
Authors: Domenico Chizzoniti, Monica Moscatelli, Letizia Cattani, Luca Preis
Abstract:
A primary healthcare facility in developing countries should be a multifunctional space able to respond to different requirements: flexibility, modularity, aggregation, and reversibility. These basic features can be better satisfied if applied to an architectural artifact that complies with the typological, figurative, and constructive aspects of the context in which it is located. Therefore, the purpose of this paper is to identify a procedure that can define the figurative aspects of the architectural structure of a health prototype for the marginal areas of developing countries through a critical approach. The application context is the rural areas of the northeast of Bahia, Brazil. The prototype is to be located in the rural district of Quingoma, in the municipality of Lauro de Freitas, a particular place where there is still a cultural fusion of black and indigenous populations. Based on a historical analysis of settlement strategies and architectural structures in spaces of public interest or collective use, this paper aims to provide a procedure able to identify the categories and rules underlying typological and figurative aspects, in order to detect significant and generalizable elements, as well as materials and constructive techniques typically adopted in the rural areas of Brazil. The object of this work is therefore not only the recovery of certain constructive approaches but also the development of a procedure that integrates the requirements of the primary healthcare prototype with its surrounding economic, social, cultural, settlement, and figurative conditions.
Keywords: architectural typology, developing countries, local construction techniques, primary health care
Procedia PDF Downloads 322
5718 Modeling of Large Elasto-Plastic Deformations by the Coupled FE-EFGM
Authors: Azher Jameel, Ghulam Ashraf Harmain
Abstract:
In recent years, enriched techniques like the extended finite element method (XFEM), the element free Galerkin method (EFGM), and the coupled finite element-element free Galerkin method (FE-EFGM) have found wide application in modeling different types of discontinuities produced by cracks, contact surfaces, and bi-material interfaces. The extended finite element method faces severe mesh distortion issues when modeling large deformation problems. The element free Galerkin method does not have mesh distortion issues, but it is computationally more demanding than the finite element method. The coupled FE-EFGM proves to be an efficient numerical tool for modeling large deformation problems, as it exploits the advantages of both FEM and EFGM. The present paper employs the coupled FE-EFGM to model large elasto-plastic deformations in bi-material engineering components. The large deformation occurring in the domain has been modeled using the total Lagrangian approach. The non-linear elasto-plastic behavior of the material has been represented by the Ramberg-Osgood model. Elastic predictor-plastic corrector algorithms are used for the evaluation of stresses during large deformation. Finally, several numerical problems are solved by the coupled FE-EFGM to illustrate its applicability, efficiency, and accuracy in modeling large elasto-plastic deformations in bi-material samples. The results obtained by the proposed technique are compared with those obtained by XFEM and EFGM, and a remarkable agreement was observed between the three techniques.
Keywords: XFEM, EFGM, coupled FE-EFGM, level sets, large deformation
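The Ramberg-Osgood model and the elastic predictor idea mentioned above can be illustrated in one dimension. The material constants below are illustrative (mild-steel-like values, MPa units assumed) since the abstract does not give the paper's parameters, and the Newton inversion here is only a scalar analogue of the predictor-corrector stress update in an FE code.

```python
# Ramberg-Osgood material model in one common uniaxial form:
#   strain = sigma/E + alpha * (sigma0/E) * (sigma/sigma0)**n
E, sigma0, alpha, n = 200e3, 250.0, 3.0 / 7.0, 5.0   # assumed constants (MPa)

def strain(sigma):
    """Total (elastic + plastic) uniaxial strain for a given stress."""
    return sigma / E + alpha * (sigma0 / E) * (sigma / sigma0) ** n

def stress(eps, tol=1e-10):
    """Invert the model with Newton iterations, starting from the elastic
    predictor s = E*eps (the 'predictor' step of a predictor-corrector scheme)."""
    s = E * eps
    for _ in range(100):
        f = strain(s) - eps
        df = 1.0 / E + alpha * n * (s / sigma0) ** (n - 1) / E
        s_new = s - f / df
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s

eps = strain(300.0)                 # total strain at 300 MPa
print(f"strain: {eps:.6f}, recovered stress: {stress(eps):.3f} MPa")
```

In the coupled FE-EFGM setting the same predictor-corrector logic runs tensorially at every integration point, with the corrector returning the trial stress to the Ramberg-Osgood curve.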
Procedia PDF Downloads 446
5717 Biomass Energy: "The Boon for the World"
Authors: Shubham Giri Goswami, Yogesh Tiwari
Abstract:
In today’s developing world, India and other countries are developing different instruments and technologies for a better and more prosperous standard of living. To power them, humankind draws on a range of energy sources, both renewable and non-renewable: fossil fuels such as coal, gas, and petroleum products on the one hand, and solar, wind, and biomass energy on the other. Non-renewable sources create pollution in the form of contaminated air, water, and more, and their unrestrained use makes the future uncertain. To minimize these environmental effects and protect a healthy environment, renewable energy sources offer a solution, in the form of biomass, solar, wind, and other technologies. Among these, biomass energy offers practical techniques well suited to ordinary people. Domestic waste is a good source of energy, and the dung extracted daily from cows, along with other natural by-products, can be used as eco-friendly fertilizer. A single cow can produce roughly 8-12 kg of dung per day, which can be used to make vermicompost fertilizer. Furthermore, calf urine can serve as an insecticide; the use of such compounds destroys insect pests and can thereby reduce communicable diseases. Biomass energy can therefore be used by everyone, including in rural areas that non-renewable energy infrastructure cannot easily reach. Biomass can be used to produce fertilizers, to feed cow-dung plants, and to drive other power generation techniques. This energy is clean, pollution-free, and available everywhere, and thus helps save our beautiful, life-giving blue planet, Earth.
Biomass energy may thus be a boon for the world in the future.
Keywords: biomass, energy, environment, human, pollution, renewable, solar energy, sources, wind
Procedia PDF Downloads 524