Search results for: Signal Processing
898 A Thermo-mechanical Finite Element Model to Predict Thermal Cycles and Residual Stresses in Directed Energy Deposition Technology
Authors: Edison A. Bonifaz
Abstract:
In this work, a numerical procedure is proposed to design dense multi-material structures using the Directed Energy Deposition (DED) process. A thermo-mechanical finite element model to predict thermal cycles and residual stresses is presented. A numerical layer build-up procedure coupled with a moving heat flux was constructed to minimize strains and residual stresses that result from the multi-layer deposition of an AISI 316 austenitic steel on an AISI 304 austenitic steel substrate. To simulate the DED process, the automated interface of the ABAQUS AM module was used to define element activation and heat input event data as a function of time and position. In this manner, the construction of ABAQUS user-defined subroutines was not necessary. Thermal cycles and thermally induced stresses created during pool crystallization in multi-layer metal AM deposition were predicted and validated. Results were analyzed in three independent metal layers of three different experiments. The one-way heat and material deposition toolpath used in the analysis was created with a MATLAB path script. An optimal combination of feedstock and heat input printing parameters suitable for fabricating multi-material dense structures in the directed energy deposition metal AM process was established. At constant power, it can be concluded that the lower the heat input, the lower the peak temperatures and residual stresses. From a design point of view, this means that the one-way heat and material deposition toolpath with the highest welding speed should be selected.
Keywords: event series, thermal cycles, residual stresses, multi-pass welding, ABAQUS AM modeler
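The conclusion above follows from the standard heat-input relation HI = η·P/v. A minimal numerical sketch (not part of the paper; the efficiency, power and speeds are assumed illustrative values):

```python
# Minimal sketch of heat input per unit length, HI = eta * P / v.
# eta, P and the speeds are assumed values, not the paper's data.
eta = 0.8      # assumed process efficiency
P = 1000.0     # heat source power (W), held constant

for v in (5.0, 10.0, 20.0):  # travel (welding) speeds (mm/s), assumed
    hi = eta * P / v         # heat input per unit length (J/mm)
    print(f"v = {v:5.1f} mm/s -> heat input = {hi:6.1f} J/mm")
```

At constant power, doubling the travel speed halves the energy deposited per unit length, which is why the highest-speed toolpath gives the lowest peak temperatures and residual stresses.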
Procedia PDF Downloads 69
897 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components
Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea
Abstract:
Students' textual feedback can hold unique patterns and useful information about the learning process: it can reveal the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key input for institutions' decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using the part of speech (PoS) feature with four machine learning algorithms: support vector machines, decision tree, random forest, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment related) and the other containing the non-assessment-related data. The second task is to use the same algorithms to build an optimal model for the whole data set and for the new data subsets, to automatically detect their sentiment. The significance of this paper is to compare the performance of the above four algorithms using the part of speech feature to the performance of the same algorithms using the n-grams feature. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models, which comprises understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithm, interpreting the mined patterns, and consolidating the discovered knowledge. The experimental results show that models using either feature performed very well on the first task. On the second task, however, models that used the part of speech feature underperformed compared with models that used unigrams and bigrams.
Keywords: assessment, part of speech, sentiment analysis, student feedback
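To illustrate the PoS-feature pipeline described above, here is a minimal sketch (not the authors' code; the comments, labels and model choice are invented) that represents each comment by its PoS-tag frequencies and trains an SVM:

```python
# Sketch: classify feedback as assessment-related (1) or not (0)
# using part-of-speech tag frequencies as features and a linear SVM.
import nltk
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_counts(text):
    """Represent a comment by the relative frequency of each PoS tag."""
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    return {tag: tags.count(tag) / len(tags) for tag in set(tags)}

# Invented labelled comments, purely for illustration.
comments = ["The exam questions were far too long.",
            "The lecture hall was cold and noisy.",
            "Coursework deadlines clashed with other units.",
            "Great teaching style and friendly staff."]
labels = [1, 0, 1, 0]

model = make_pipeline(DictVectorizer(), LinearSVC())
model.fit([pos_counts(c) for c in comments], labels)
print(model.predict([pos_counts("The quiz marking felt inconsistent.")]))
```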
Procedia PDF Downloads 142
896 Olive-Mill Wastewater and Organo-Mineral Fertilizers Application for the Control of Parasitic Weed Phelipanche ramosa L. Pomel in Tomato
Authors: Grazia Disciglio, Francesco Lops, Annalisa Tarantino, Emanuele Tarantino
Abstract:
The parasitic weed species Phelipanche ramosa (L.) Pomel is one of the major constraints on the tomato crop in the Apulia region (southern Italy). A field experiment was carried out to investigate the effect of six organic compounds (olive-mill wastewater, Allil isothiocyanate®, Alfa plus K®, Radicon®, Rizosum Max®, Kendal Nem®) in a naturally infested tomato field during the 2016 growing season. A randomized block design with three replicates was adopted. Tomato seedlings were transplanted on 19 May 2016. During the growing cycle of the tomato, at 74, 81, 93 and 103 days after transplantation (DAT), the number of parasitic shoots (branched plants) that had emerged in each plot was determined. At harvest on 13 September 2016, the major quantitative and qualitative yield parameters were determined, including marketable yield, mean fruit weight, dry matter, soluble solids, fruit colour, pH and titratable acidity. The results show that none of the treatments provided complete control of P. ramosa. However, among the products tested, olive-mill wastewater, Alfa plus K®, Rizosum Max® and Kendal Nem® applied to the soil gave a number of emerged shoots significantly lower than Radicon®, and especially than the Allil isothiocyanate® treatment and the untreated control. Regarding the effect of the different treatments on the tomato productive parameters, marketable yield was significantly higher in the same treatments that gave the lower P. ramosa infestation. No significant differences in the other fruit characteristics were observed.
Keywords: processing tomato crop, Phelipanche ramosa, olive-mill wastewater, organic fertilizers
Procedia PDF Downloads 325
895 Integrated Decision Support for Energy/Water Planning in Zayandeh Rud River Basin in Iran
Authors: Safieh Javadinejad
Abstract:
In order to make well-informed decisions in long-term system planning, resource managers and policy makers need to understand the interconnections between energy and water use and production, that is, the energy-water nexus. Planning and assessment issues include the development of strategies for reducing the vulnerability of water and energy systems to climate change while also cutting greenhouse gas emissions. In order to deliver useful decision support for climate adaptation policy and planning, it is essential to understand the regionally specific features of the energy-water nexus, as well as the past and future of the water and energy supply systems. This helps decision makers understand current water-energy system conditions and the capacity for future adaptation plans. This research presents an integrated hydrology/energy modeling platform that can carry out water-energy analyses based on a detailed representation of local conditions. The platform links the Water Evaluation and Planning (WEAP) system and the Long Range Energy Alternatives Planning (LEAP) system to create a full picture of water-energy processes. This allows water managers and policy makers to readily understand the links between energy system improvements and hydrological processes, and to see how future climate change will affect water-energy systems. The Zayandeh Rud river basin in Iran is selected as a case study to show the results and application of the analysis. This region is known for the strong interdependence of its electric power and water sectors. The linkages between water, energy and climate change, and possible adaptation strategies, are described along with early insights from applications of the integrated modeling system.
Keywords: climate impacts, hydrology, water systems, adaptation planning, electricity, integrated modeling
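The kind of data exchange such a coupled platform performs can be sketched schematically. The following toy loop (hypothetical functions and invented numbers, emphatically not the WEAP or LEAP APIs) iterates a water model and an energy model per year until their mutual demands are consistent:

```python
# Purely schematic water-energy coupling: the water side reports pumping
# energy needs, the energy side reports cooling water needs, and the two
# are iterated to a fixed point each year. All formulas are toy assumptions.
def water_system(year, cooling_water_need):
    supply = 1000 - 5 * (year - 2020) - cooling_water_need  # MCM/yr, toy
    return {"deliveries": supply, "pumping_need_gwh": 0.3 * supply}

def energy_system(year, pumping_need_gwh):
    demand = 500 + 2 * (year - 2020) + pumping_need_gwh     # GWh/yr, toy
    return {"generation": demand, "cooling_water_need": 0.05 * demand}

for year in range(2020, 2025):
    cooling = 0.0
    for _ in range(10):  # fixed-point iteration until mutually consistent
        water = water_system(year, cooling)
        power = energy_system(year, water["pumping_need_gwh"])
        cooling = power["cooling_water_need"]
    print(year, round(water["deliveries"], 1), round(power["generation"], 1))
```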
Procedia PDF Downloads 292
894 Nanofluidic Cell for Resolution Improvement of Liquid Transmission Electron Microscopy
Authors: Deybith Venegas-Rojas, Sercan Keskin, Svenja Riekeberg, Sana Azim, Stephanie Manz, R. J. Dwayne Miller, Hoc Khiem Trieu
Abstract:
Liquid Transmission Electron Microscopy (TEM) is a growing area with a broad range of applications from physics and chemistry to materials engineering and biology, in which it is possible to image previously unseen phenomena in situ. For this, a nanofluidic device is used to introduce the liquid sample into the microscope while keeping it encapsulated against the high vacuum. In recent years, Si3N4 windows have been widely used because of their mechanical stability and low imaging contrast. Nevertheless, the pressure difference between the fluid inside and the vacuum in the TEM causes the windows to bulge. This increases the imaged fluid volume, which decreases the signal-to-noise ratio (SNR), limiting the achievable spatial resolution. In the proposed device, the membrane is reinforced with a microstructure capable of withstanding higher pressure differences and almost completely eliminating the bulging. A theoretical study is presented with Finite Element Method (FEM) simulations, which provide a deep understanding of the mechanical conditions of the membrane and prove the effectiveness of this novel concept. Bulging and von Mises stress were studied for different membrane dimensions, geometries, materials, and thicknesses. The device was microfabricated from a thin wafer coated with thin layers of SiO2 and Si3N4. After the lithography process, these layers were etched (reactive ion etching and buffered oxide etch (BOE), respectively). After that, the microstructure was etched (deep reactive ion etching). Then the back-side SiO2 was etched (BOE) and the array of free-standing micro-windows was obtained. Additionally, a Pyrex wafer was patterned with windows and inlets/outlets and bonded (anodic bonding) to the Si side to facilitate handling of the thin wafer. Later, a thin spacer was sputtered and patterned with microchannels and trenches to guide the nanoflow with the samples. This approach considerably reduces the common bulging problem of the window, improving the SNR, contrast and spatial resolution, substantially increasing the mechanical stability of the windows, and allowing a larger viewing area. These developments lead to a wider range of applications of liquid TEM, expanding the spectrum of possible experiments in the field.
Keywords: liquid cell, liquid transmission electron microscopy, nanofluidics, nanofluidic cell, thin films
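An order-of-magnitude check on the bulging problem can be made with small-deflection plate theory; the sketch below (not the paper's FEM study, with assumed textbook-style Si3N4 values and an assumed geometry) shows why the linear regime is left far behind:

```python
# Clamped square plate under uniform pressure: w_max = 0.00126 * q * a^4 / D,
# with flexural rigidity D = E t^3 / (12 (1 - nu^2)). All numbers are assumed.
E = 250e9    # Young's modulus of Si3N4 (Pa), assumed
nu = 0.23    # Poisson's ratio, assumed
t = 50e-9    # membrane thickness (m), assumed
a = 50e-6    # window side length (m), assumed
q = 1.0e5    # liquid-to-vacuum pressure difference (Pa), roughly 1 atm

D = E * t**3 / (12 * (1 - nu**2))
w_max = 0.00126 * q * a**4 / D
print(f"estimated centre deflection: {w_max*1e6:.0f} um for a {t*1e9:.0f} nm membrane")
# w_max far exceeds t, so linear theory has broken down: the window is deep in
# the large-deflection (membrane) regime, which is why bulging is so severe
# and why a reinforcing microstructure is effective.
```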
Procedia PDF Downloads 255
893 Nanowire Substrate to Control Differentiation of Mesenchymal Stem Cells
Authors: Ainur Sharip, Jose E. Perez, Nouf Alsharif, Aldo I. M. Bandeas, Enzo D. Fabrizio, Timothy Ravasi, Jasmeen S. Merzaban, Jürgen Kosel
Abstract:
Bone marrow-derived human mesenchymal stem cells (MSCs) are attractive candidates for tissue engineering and regenerative medicine due to their ability to differentiate into osteoblasts, chondrocytes or adipocytes. Differentiation is influenced by biochemical and biophysical stimuli provided by the microenvironment of the cell. Thus, altering the mechanical characteristics of a cell culture scaffold can directly influence a cell's microenvironment and lead to stem cell differentiation. Mesenchymal stem cells were cultured on densely packed, vertically aligned magnetic iron nanowires (NWs), and the effects of the NWs on cytoskeleton rearrangement and differentiation were studied. An electrochemical deposition method was employed to fabricate the NWs in nanoporous alumina templates, followed by a partial release to reveal the NW array. This created a cell growth substrate with free-standing NWs. The Fe NWs possessed a length of 2-3 µm, with each NW having a diameter of 33 nm on average. Mechanical stimuli generated by the physical movement of these iron NWs in response to a magnetic field can stimulate osteogenic differentiation. Induction of osteogenesis was estimated using an osteogenic marker, osteopontin, and a reduction of the stem cell markers CD73 and CD105. MSCs were grown on the NWs, and fluorescence microscopy was employed to monitor the expression of the markers. A magnetic field with an intensity of 250 mT and a frequency of 0.1 Hz was applied for 12 hours/day over periods of one week and two weeks. The magnetically activated substrate enhanced the osteogenic differentiation of the MSCs compared to culture conditions without a magnetic field. Quantification of the osteopontin signal revealed approximately a seven-fold increase in the expression of this protein after two weeks of culture. Immunostaining against CD73 and CD105 revealed expression of these markers at the earlier time point (two days) and a considerable reduction after one week of exposure to the magnetic field. Overall, these results demonstrate the application of a magnetic NW substrate in stimulating the osteogenic differentiation of MSCs. This method significantly decreases the time needed to induce osteogenic differentiation compared to commercial biochemical methods, such as osteogenic differentiation kits, which usually require more than two weeks. Contact-free stimulation of MSC differentiation using a magnetic field has potential uses in tissue engineering, regenerative medicine, and bone formation therapies.
Keywords: cell substrate, magnetic nanowire, mesenchymal stem cell, stem cell differentiation
Procedia PDF Downloads 196
892 AI-Driven Solutions for Optimizing Master Data Management
Authors: Srinivas Vangari
Abstract:
In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises, and Master Data Management (MDM) plays a central role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI (quantitative and qualitative analysis) into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI's predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making. These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.
Keywords: artificial intelligence, master data management, data governance, data quality
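One of the simplest ingredients of the duplicate-record reduction mentioned above is fuzzy record matching. A minimal sketch (not the paper's system; records, weights and threshold are invented) of flagging candidate duplicates in master data:

```python
# Flag candidate duplicate master-data records by fuzzy name similarity
# plus an exact city match; weights and cut-off are illustrative assumptions.
from difflib import SequenceMatcher
from itertools import combinations

records = [
    {"id": 1, "name": "Acme Corporation", "city": "Boston"},
    {"id": 2, "name": "ACME Corp.",       "city": "Boston"},
    {"id": 3, "name": "Globex Ltd",       "city": "Berlin"},
]

def similarity(a, b):
    """Weighted score: 0.8 * normalised name similarity + 0.2 * city match."""
    name = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return 0.8 * name + 0.2 * (a["city"] == b["city"])

THRESHOLD = 0.75  # assumed cut-off for human review
for r1, r2 in combinations(records, 2):
    score = similarity(r1, r2)
    if score >= THRESHOLD:
        print(f"possible duplicate: {r1['id']} ~ {r2['id']} (score {score:.2f})")
```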
Procedia PDF Downloads 18
891 Effect of Microstructure and Texture of Magnesium Alloy Due to Addition of Pb
Authors: Yebeen Ji, Jimin Yun, Kwonhoo Kim
Abstract:
Magnesium alloys have seen limited industrial application because of their limited slip systems and high plastic anisotropy. It is known that specific textures form during processing (rolling, etc.), and these textures cause poor formability. To solve these problems, many researchers have studied controlling texture by adding rare-earth elements. However, their high cost limits their use; therefore, alternatives are needed to replace them. Although Pb addition does not directly improve magnesium properties, it is known to suppress the diffusion of other alloying elements and reduce grain boundary energy. These characteristics are similar to those of rare-earth additions, and a similar texture behavior is expected as well. However, there is insufficient research on this. Therefore, this study investigates the development of texture and microstructure after adding Pb to magnesium. The AZ61 alloy and an Mg-15wt%Pb alloy were compared and analyzed to determine the effect of adding solute elements. The alloys were hot rolled and annealed to form a single phase and an initial texture. Afterward, the specimens were set to contract parallel to the rolling surface and elongate parallel to the rolling direction, and then subjected to high-temperature plane strain compression at 723 K and 0.05/s. Microstructural analysis and texture measurements were performed by SEM-EBSD. The peak stress in the true stress-strain curve after compression was higher in AZ61, but the shape of the flow curve was similar for both alloys. For both alloys, continuous dynamic recrystallization was confirmed to occur during compression. The basal texture developed parallel to the compressed surface, and the pole density was lower in the Mg-15wt%Pb alloy. It is confirmed that this change in behavior arises because, when Pb is added, the orientation distribution of the recrystallized grains is more random than that of the parent grains.
Keywords: Mg, texture, Pb, DRX
Procedia PDF Downloads 49
890 Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites
Authors: S. D. El Wakil, M. Pladsen
Abstract:
Fiber reinforced polymeric (FRP) composites are finding widespread industrial application because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, components or products made of FRP composites are very seldom ready for use as molded. Secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together to produce assemblies. That creates problems, since FRP composites are neither homogeneous nor isotropic. The problems encountered include damage in the region around the drilled hole and drilling-induced delamination of the plies, which occurs at both the entrance and exit planes of the workpiece. Evidently, the functionality of the workpiece would be detrimentally affected. The current work was carried out with the aim of eliminating, or at least minimizing, the workpiece damage associated with drilling of FRP composites. Each test specimen was a woven graphite-fiber/epoxy composite with a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal one, expressed as a percentage of the nominal diameter. The latter was determined for each combination of feed rate and cutting speed, and a matrix comprising those values was established, where the columns indicate varying feed rates and the rows varying cutting speeds. Next, the analysis of variance (ANOVA) approach was employed using Minitab software in order to obtain the combination that would minimize the drilling-induced damage. Experimental results show that low feed rates coupled with low cutting speeds yielded the best results.
Keywords: drilling of composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites
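The damage metric and the ANOVA over the feed-rate/cutting-speed matrix can be sketched as follows (not the authors' Minitab analysis; the factor levels, toy hole-diameter model and replicate counts are all invented):

```python
# Compute the % damage metric |d_drilled - d_nominal| / d_nominal * 100 for a
# feed-rate x cutting-speed grid, then run a two-way ANOVA on the results.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

NOMINAL = 6.0                       # nominal hole diameter (mm), assumed
rng = np.random.default_rng(1)

rows = []
for speed in (20, 40, 60):          # cutting speeds (m/min), assumed levels
    for feed in (0.05, 0.10, 0.15): # feed rates (mm/rev), assumed levels
        for _ in range(3):          # replicates
            # toy model: higher feed and speed enlarge the hole slightly
            d = NOMINAL + 0.1 * feed + 0.0005 * speed + rng.normal(0, 0.005)
            damage = abs(d - NOMINAL) / NOMINAL * 100
            rows.append({"speed": speed, "feed": feed, "damage": damage})

df = pd.DataFrame(rows)
print(sm.stats.anova_lm(ols("damage ~ C(speed) + C(feed)", df).fit(), typ=2))
```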
Procedia PDF Downloads 390
889 Optimizing Energy Efficiency: Leveraging Big Data Analytics and AWS Services for Buildings and Industries
Authors: Gaurav Kumar Sinha
Abstract:
In an era marked by increasing concerns about energy sustainability, this research endeavors to address the pressing challenge of energy consumption in buildings and industries. This study delves into the transformative potential of AWS services in optimizing energy efficiency. The research is founded on the recognition that effective management of energy consumption is imperative for both environmental conservation and economic viability. Buildings and industries account for a substantial portion of global energy use, making it crucial to develop advanced techniques for analysis and reduction. This study sets out to explore the integration of AWS services with big data analytics to provide innovative solutions for energy consumption analysis. Leveraging AWS's cloud computing capabilities, scalable infrastructure, and data analytics tools, the research aims to develop efficient methods for collecting, processing, and analyzing energy data from diverse sources. The core focus is on creating predictive models and real-time monitoring systems that enable proactive energy management. By harnessing AWS's machine learning and data analytics capabilities, the research seeks to identify patterns, anomalies, and optimization opportunities within energy consumption data. Furthermore, this study aims to propose actionable recommendations for reducing energy consumption in buildings and industries. By combining AWS services with metrics-driven insights, the research strives to facilitate the implementation of energy-efficient practices, ultimately leading to reduced carbon emissions and cost savings. The integration of AWS services not only enhances the analytical capabilities but also offers scalable solutions that can be customized for different building and industrial contexts. The research also recognizes the potential for AWS-powered solutions to promote sustainable practices and support environmental stewardship.
Keywords: energy consumption analysis, big data analytics, AWS services, energy efficiency
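The anomaly-detection building block mentioned above can be illustrated independently of any cloud platform. A minimal sketch (not from the paper; the synthetic meter data, window and threshold are assumptions) that flags unusual consumption with a rolling z-score, the kind of logic such a pipeline might run on streamed meter data:

```python
# Flag anomalous hourly energy readings with a rolling z-score.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
hours = pd.date_range("2024-01-01", periods=24 * 14, freq="h")
kwh = pd.Series(50 + 10 * np.sin(np.arange(len(hours)) * 2 * np.pi / 24)
                + rng.normal(0, 2, len(hours)), index=hours)
kwh.iloc[200] += 40                   # inject a fault to detect

roll = kwh.rolling(window=48, min_periods=24)
z = (kwh - roll.mean()) / roll.std()  # rolling z-score
print(kwh[z.abs() > 4])               # assumed alert threshold
```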
Procedia PDF Downloads 64
888 Influence of Glenohumeral Joint Approximation Technique on the Cardiovascular System in the Acute Phase after Stroke
Authors: Iva Hereitova, Miroslav Svatek, Vit Novacek
Abstract:
Background and Aim: Autonomic imbalance is one of the complications in immobilized patients in the acute stage after a stroke. The predominance of sympathetic activity significantly increases cardiac activity. The technique of glenohumeral joint approximation may contribute in a non-pharmacological way to the regulation of blood pressure and heart rate in this risk group. The aim of the study was to evaluate the effect of glenohumeral joint approximation on changes in heart rate and blood pressure in immobilized patients in the acute phase after a stroke. Methods: The experimental study bilaterally evaluated heart rate and systolic and diastolic pressure values before and after glenohumeral joint approximation in 40 immobilized participants (72.6 ± 10.2 years) in the acute phase after stroke. The experimental group was compared with 40 healthy participants in the control group (68.6 ± 14.2 years). An SpO2 vital signs monitor and a validated Microlife WatchBP Office blood pressure monitor were used for the evaluation. Statistical processing and evaluation were performed in MATLAB R2019 (The MathWorks, Inc., Natick, MA, USA). Results: Approximation of the glenohumeral joint resulted in a statistically significant decrease in systolic and diastolic pressure. The average decrease in systolic pressure for the individual groups ranged from 8.2 to 11.3 mmHg (p < 0.001). For diastolic pressure, the average decrease ranged from 5.0 to 14.2 mmHg (p < 0.001). A statistically significant reduction in heart rate (p < 0.01) was found only in patients after ischemic stroke in the inferior cerebral artery territory, with an average decrease of 3.9 beats per minute (median 4 beats per minute). Conclusion: Approximation of the glenohumeral joint leads to a statistically significant decrease in systolic and diastolic pressure in immobilized patients in the acute phase after stroke.
Keywords: approximation technique, cardiovascular system, glenohumeral joint, stroke
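The pre/post comparison behind these results is a paired design. A minimal sketch (not the authors' MATLAB code; the blood-pressure readings below are fabricated examples) of a paired t-test on such data:

```python
# Paired t-test on hypothetical pre/post systolic readings (mmHg).
import numpy as np
from scipy import stats

systolic_before = np.array([152, 160, 148, 171, 158, 149, 163, 155])
systolic_after  = np.array([143, 150, 141, 159, 150, 142, 151, 147])

t, p = stats.ttest_rel(systolic_before, systolic_after)
print(f"mean drop = {np.mean(systolic_before - systolic_after):.1f} mmHg, "
      f"t = {t:.2f}, p = {p:.4f}")
```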
Procedia PDF Downloads 216
887 The Effect of Restaurant Residuals on Performance of Japanese Quail
Authors: A. A. Saki, Y. Karimi, H. J. Najafabadi, P. Zamani, Z. Mostafaie
Abstract:
Restaurant residuals are important for reasons such as the competition between humans and animals for cereals, increasing environmental pollution, and the high cost of producing livestock products. Restaurant residuals have a high nutritional value (high protein and energy), so they could replace part of poultry diets, especially for Japanese quail. Today, the processing and use of such residuals is a challenge confronting modern industry. Increasing costs, together with the pressures and problems associated with waste disposal, reinforce the need to re-evaluate and utilize waste as livestock and poultry feed. This study aimed to investigate the effects of different levels of restaurant residuals on the performance of 300 laying Japanese quails. The experiment included 5 treatments with 4 replicates of 15 quails each, from 10 to 18 weeks of age, in a completely randomized design (CRD). The treatments consisted of a basal diet of corn and soybean meal (without restaurant residuals), and, for treatments 2, 3, 4 and 5, the basal diet containing 5, 10, 15 and 20% restaurant residuals, respectively. There was no significant effect of restaurant residual level on body weight (BW), feed conversion ratio (FCR), egg production percentage (EP) or egg mass (EM) between treatments (P > 0.05). However, feed intake (FI) at 5% restaurant residuals was significantly higher than in the 20% treatment (P < 0.05). Egg weight (EW) was also higher with 20% restaurant residuals than with 10% (P < 0.05). Yolk weight (YW) in the treatments containing 10 and 20% restaurant residuals was significantly higher than in the control (P < 0.05). Egg white weight (EWW) in the 20% and 5% treatments was significantly increased compared with the 10% treatment (P < 0.05). Furthermore, EW, egg weight to shell surface area, and egg surface area in the 20% treatment were significantly higher than in the control and the 10% treatment (P < 0.05). Overall, the results of this study show that restaurant residuals at levels of 10 and 15 percent could replace part of the laying quail ration without any adverse effect.
Keywords: by-product, laying quail, performance, restaurant residuals
Procedia PDF Downloads 166
886 Deep Learning for Image Correction in Sparse-View Computed Tomography
Authors: Shubham Gogri, Lucia Florescu
Abstract:
Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose exposure to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data result in lower quality of the reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of sparse-view CT images. A first approach involved employing Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the incorporation of the Charbonnier loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based generator and a discriminator based on convolutional neural networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of the perceptual loss, calculated from feature vectors extracted from the VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including Wasserstein, Charbonnier, and perceptual loss. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) between the corrected images and the ground truth. Furthermore, learning curves and qualitative comparisons added evidence of the enhanced image quality and the network's increased stability, while preserving pixel value intensity. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net
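The Charbonnier term and the composite generator objective described above can be sketched briefly (an illustration under assumptions, not the authors' code; the epsilon, the loss weights and the stubbed adversarial/perceptual terms are invented):

```python
# Charbonnier loss (a smooth, robust variant of L1) and a combined
# generator objective of the kind the abstract describes, in PyTorch.
import torch

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: sqrt((x - y)^2 + eps^2), averaged over pixels."""
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))

# Toy tensors standing in for a corrected image and the full-view ground truth.
pred = torch.rand(1, 1, 64, 64, requires_grad=True)
target = torch.rand(1, 1, 64, 64)

l_char = charbonnier(pred, target)
l_adv = torch.tensor(0.0)   # would come from the discriminator (Wasserstein)
l_perc = torch.tensor(0.0)  # would come from VGG16 feature-space distances
loss = l_char + 0.01 * l_adv + 0.1 * l_perc  # assumed weights
loss.backward()
print(float(l_char))
```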
Procedia PDF Downloads 162
885 The Web of Injustice: Untangling Violations of Personality Rights in European International Private Law
Authors: Sara Vora (Hoxha)
Abstract:
Defamation, invasion of privacy, and cyberbullying have all increased in tandem with the growth of the internet. European international private law may struggle to deal with such transgressions if they occur in many jurisdictions. The current study examines how effectively the legal system of European international private law addresses abuses of personality rights in cyberspace. The study starts by discussing how established legal frameworks are being threatened by online personality rights abuses. The article then looks into the rules and regulations of European international private law that are in place to handle cross-border lawsuits. This article examines the different elements that courts evaluate when deciding which law to use in a particular case, focusing on the concepts of jurisdiction, choice of law, and recognition and enforcement of foreign judgements. Next, the research analyses the function of the European Union in preventing and punishing online personality rights abuses. Key pieces of law that control the collecting and processing of personal data on the Internet, including the General Data Protection Regulation (GDPR) and the e-Commerce Directive, are discussed. In addition, this article investigates how the ECtHR handles cases involving the infringement of personal freedoms, including privacy and speech. The article finishes with an assessment of how well the legal framework of European international private law protects individuals' right to privacy online. It draws attention to problems with the present legal structure, such as the inability to enforce international judgements, the inconsistency between national laws, and the necessity for stronger measures to safeguard people's rights online. This paper concludes that while European international private law provides a useful framework for dealing with violations of personality rights online, further harmonisation and stronger enforcement mechanisms are necessary to effectively protect individuals' rights in the digital age.
Keywords: European international private law, personality rights, internet, jurisdiction, cross-border disputes, data protection
Procedia PDF Downloads 75
884 The Predictive Implication of Executive Function and Language in Theory of Mind Development in Preschool Age Children
Authors: Michael Luc Andre, Célia Maintenant
Abstract:
Theory of mind is a milestone in child development that allows children to understand that others can have mental states different from their own. Understanding the developmental stages of theory of mind in children has led researchers to two connected research problems: on one hand, the link between executive function and theory of mind, and on the other, the relationship between theory of mind and syntax processing. These two lines of research have produced a large literature full of important results, despite a certain level of disagreement among researchers. For a long time, these two research perspectives continued to grow separately, despite conclusions suggesting that the three variables should involve the same developmental period. Indeed, our goal was to study the relation between theory of mind, executive function, and language via a single research question: whether, between executive function and language, one of the two variables plays a critical role in the relationship between theory of mind and the other variable. Thus, 112 children aged between three and six years old were recruited to complete a receptive and an expressive vocabulary task, a syntax understanding task, a theory of mind task, and three executive function tasks (inhibition, cognitive flexibility and working memory). The results showed significant correlations between performance on the theory of mind task and performance on the executive function tasks, except for the cognitive flexibility task. We also found significant correlations between success on the theory of mind task and performance on all language tasks. Multiple regression analysis identified only syntax and general language abilities as possible predictors of theory of mind performance in our sample of preschool-age children. The results were discussed from the perspective of a major role of language abilities in theory of mind development. We also discussed possible reasons for the non-significance of the executive domains in predicting theory of mind performance, and the implications of our results for the literature.
Keywords: child development, executive function, general language, syntax, theory of mind
Procedia PDF Downloads 64
883 Preparation of β-Polyvinylidene Fluoride Film for Self-Charging Lithium-Ion Battery
Authors: Nursultan Turdakyn, Alisher Medeubayev, Didar Meiramov, Zhibek Bekezhankyzy, Desmond Adair, Gulnur Kalimuldina
Abstract:
In recent years, the development of sustainable energy sources has attracted extensive research interest due to the ever-growing demand for energy. As an alternative energy source to power small electronic devices, ambient energy harvesting from vibration or human body motion is considered a potential candidate. Despite the enormous progress in battery research over about three decades in terms of safety, life cycle and energy density, batteries have not reached the level needed to conveniently power wearable electronic devices such as smartwatches, bands, hearing aids, etc. For this reason, the development of self-charging power units with excellent flexibility and integrated energy harvesting and storage is crucial. Self-powering is a key idea that makes it possible for a system to operate sustainably, and it is now gaining acceptance in many fields, including sensor networks, the internet of things (IoT) and implantable in-vivo medical devices. To address this energy harvesting issue, self-powered nanogenerators (NGs) were proposed and have proved highly effective. Usually, sustainable power is delivered through energy harvesting and storage devices by connecting them to a power management circuit; as for energy storage, the Li-ion battery (LIB) is one of the most effective technologies. Through the movement of Li ions under an externally applied voltage, electrochemical reactions at the anode and cathode store the electrical energy as chemical energy. In this paper, we present a simultaneous process of converting mechanical energy into chemical energy in which an NG and an LIB are combined as an all-in-one power system. The electrospinning method was used as the initial step for the development of such a system with a β-PVDF separator. The obtained film showed promising voltage output at different stress frequencies. X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FT-IR) analyses showed a high percentage of the β phase in the PVDF polymer. Moreover, it was found that the addition of 1 wt.% BTO (barium titanate) results in higher quality fibers. When comparing the pure 20 wt.% PVDF solution with the BTO-added one, the latter was more viscous; hence, that sample was electrospun uniformly without any beads. Lastly, to test the sensor application of such a film, a dedicated testing device was developed. With this device, the force of a finger tap can be applied at different frequencies so that electrical signal generation can be validated.
Keywords: electrospinning, nanogenerators, piezoelectric PVDF, self-charging li-ion batteries
Procedia PDF Downloads 162
882 Changes in Textural Properties of Zucchini Slices Under Effects of Partial Predrying and Deep-Fat-Frying
Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner
Abstract:
Changes in the textural properties of any food material during processing are significant for consumers' evaluation and directly affect their decisions. Thus, any food material should be assessed in terms of textural properties after any process. In the present study, zucchini slices were partially predried to control and reduce the product's final oil content. A conventional oven was used for partial dehydration of the zucchini slices, and frying was then carried out in an industrial fryer with a temperature controller. This study focused on the effect of the predrying process on the textural properties of fried zucchini slices. Texture profile analysis was performed; hardness, elasticity, chewiness, and cohesiveness were the studied texture parameters. Temperature and weight loss were the monitored parameters of the predrying process, whereas in frying, oil temperature and process time were controlled. Optimization of the two successive processes was done by response surface methodology, one of the commonly used statistical process optimization tools. Models developed for each texture parameter were highly successful at predicting their values as a function of the studied process conditions. Process optimization was performed according to target values for each property, determined from directly fried zucchini slices that took the highest score in sensory evaluation. Results indicated that the textural properties of predried and then fried zucchini slices could be controlled by well-established equations. This is thought to be significant for the fried-food industry, where controlling sensorial properties, above all texture-related ones, is crucial to guiding consumer perception. This project (113R015) has been supported by TUBITAK.
Keywords: optimization, response surface methodology, texture profile analysis, conventional oven, modelling
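The response-surface step can be sketched compactly. The following illustration (invented data, not the study's measurements) fits a second-order model for one texture parameter, hardness, against two assumed factors, predrying time t and frying temperature T:

```python
# Fit y = b0 + b1*t + b2*T + b12*t*T + b11*t^2 + b22*T^2 by least squares.
import numpy as np

# Invented design points: (predrying time min, frying temp deg C) -> hardness (N)
X = np.array([[5, 160], [5, 180], [10, 160], [10, 180], [7.5, 170],
              [7.5, 170], [5, 170], [10, 170], [7.5, 160], [7.5, 180]])
y = np.array([12.1, 10.3, 14.8, 12.9, 12.5, 12.7, 11.0, 13.6, 13.2, 11.5])

def design_row(t, T):
    return [1.0, t, T, t * T, t**2, T**2]

A = np.array([design_row(t, T) for t, T in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("coefficients:", np.round(coef, 3))
print("predicted hardness at (8 min, 175 C):",
      round(float(np.dot(design_row(8.0, 175.0), coef)), 2))
```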
Procedia PDF Downloads 433
881 Physicochemical Characterization of Asphalt Ridge Froth Bitumen
Authors: Nader Nciri, Suil Song, Namho Kim, Namjun Cho
Abstract:
Properties and compositions of bitumen and bitumen-derived liquids have significant influences on the selection of recovery, upgrading and refining processes. Optimal process conditions can often be directly related to these properties, and the end uses of bitumen and bitumen products are thus related to their compositions. Because it is not possible to conduct a complete analysis of the molecular structure of bitumen, characterization must be made in other terms. The present paper focuses on the physico-chemical analysis of two different types of bitumen, chosen according to the original crude source (oil sand and crude petroleum) and the mode of processing. The aim of this study is to determine both the effect of manufacturing on the chemical species and the chemical organization as a function of the type of bitumen sample. In order to obtain information on bitumen chemistry, elemental analysis (C, H, N, S, and O), heavy metal (Ni, V) concentrations, IATROSCAN chromatography (thin layer chromatography-flame ionization detection), FTIR spectroscopy, and 1H NMR spectroscopy have all been used. The characterization includes information about the major compound types (saturates, aromatics, resins and asphaltenes), which can be compared with similar data for other bitumens and, more importantly, correlated with data from petroleum samples for which refining characteristics are known. Examination of Asphalt Ridge froth bitumen showed that it differed significantly from representative petroleum pitches, principally in its nonhydrocarbon content, heavy metal content and aromatic compounds. When possible, properties and composition were related to recovery and refining processes. This information is important because of the effects that composition has on recovery and processing reactions.
Keywords: froth bitumen, oil sand, asphalt ridge, petroleum pitch, thin layer chromatography-flame ionization detection, infrared spectroscopy, 1H nuclear magnetic resonance spectroscopy
Procedia PDF Downloads 427
880 Influence of Magnetic Field on Microstructure and Properties of Copper-Silver Composites
Authors: Engang Wang
Abstract:
Cu-alloy composites are a class of high-strength, high-conductivity Cu-based alloys which have excellent mechanical and electrical properties and are widely used in the electronic, electrical and machinery industries. However, the solidification microstructure of the composites, such as the primary or secondary dendrite arm spacing, plays an important role in their tensile strength and conductivity, and it is affected by the fabrication method. In this paper, two directional solidification methods, the exothermic powder (EP) method and the liquid metal cooling (LMC) method, were used to fabricate Cu-alloy composites under different applied magnetic fields to investigate their influence on the solidification microstructure of the Cu alloy. The fabricated Cu-alloy composites were then drawn into wires to investigate the influence of the fabrication method and magnetic fields on the drawn microstructure of the fiber-reinforced Cu-alloy composites and their properties. The experiments on the Cu-Ag alloy under directional solidification and horizontal magnetic fields with different processing parameters show that: 1) For the Cu-Ag alloy with the EP method, the dendrites grow directionally in the cooled copper mould, and the solidification microstructure is effectively refined by applying horizontal magnetic fields. 2) For the Cu-Ag alloy with the LMC method, the primary dendrite arm spacing decreases and the Ag content in the dendrites increases with increasing withdrawal velocity. 3) The dendrites are refined and the Ag content in the dendrites increases with increasing magnetic flux density; meanwhile, the growth direction of the dendrites is also affected by the magnetic field. The results on the Cu-Ag in situ composites produced by drawing deformation show that the micro-hardness of the alloy is higher at smaller dendrite arm spacing. When the dendrite growth orientation is consistent with the sample axis, the conductivity of the composites increases as the secondary dendrite arm spacing increases. However, the conductivity decreases under applied magnetic fields owing to the disruption of the dendrite growth orientation.
Keywords: Cu-Ag composite, magnetic field, microstructure, solidification
Procedia PDF Downloads 214
879 Bias Minimization in Construction Project Dispute Resolution
Authors: Keyao Li, Sai On Cheung
Abstract:
Incorporation of alternative dispute resolution (ADR) mechanisms has been the main feature of the current trend in construction project dispute resolution (CPDR). ADR approaches have been identified as efficient mechanisms and are suitable alternatives to litigation and arbitration. Moreover, the use of ADR in this multi-tiered dispute resolution process often leads to repeated evaluations of the same dispute. Multi-tiered CPDR may thus become a breeding ground for cognitive biases. When complete knowledge is not available at an early tier of construction dispute resolution, disputing parties may form preconceptions of the dispute matter or of the counterpart. These preconceptions influence their information processing in the subsequent tier: disputing parties tend to search for and interpret further information in a self-defensive way to confirm their early positions. Their imbalanced information collection boosts their confidence in the assessments they hold; their attitudes harden and become difficult to compromise. The occurrence of cognitive bias therefore impedes efficient dispute settlement. This study aims to explore ways to minimize bias in CPDR. Based on a comprehensive literature review, three types of bias-minimizing approaches were collected: strategy-based, attitude-based and process-based. These approaches were further operationalized into bias-minimizing measures. To verify the usefulness and practicability of these measures, semi-structured interviews were conducted with ten CPDR third-party neutral professionals, all of whom have at least twenty years of experience in facilitating the settlement of construction disputes. The usefulness, as well as the implications, of the bias-minimizing measures were validated and refined by these experts. There are few studies on cognitive bias in construction management in general and in CPDR in particular. This study would be the first of its type to enhance the efficiency of construction dispute resolution by highlighting strategies to minimize the biases therein.
Keywords: bias, construction project dispute resolution, minimization, multi-tiered, semi-structured interview
Procedia PDF Downloads 186
878 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems
Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue
Abstract:
Recent technological advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation; therefore, new approaches are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which digital precoders alone can accomplish precoding, MIMO at mmWave is different because of digital precoding limitations: fully digital precoding requires a large number of radio frequency (RF) chains, together with their signal mixers and analog-to-digital converters. As RF chains are costly and power-hungry, another alternative is needed. Although the hybrid precoding architecture, based on the combination of a baseband precoder and an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders remains open. According to the mapping strategies from RF chains to the antenna elements, there are two main categories of hybrid precoding architecture. As a hybrid precoding sub-array architecture, the partially-connected structure reduces hardware complexity by using a smaller number of phase shifters, at the cost of some beamforming gain. In this paper, we treat hybrid precoder design in mmWave MIMO systems as a matrix factorization problem and adopt the alternating minimization principle to solve it. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method. Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms and significantly reduces the computational complexity. Furthermore, valuable design insights are provided when the proposed algorithm is used for simulation comparisons between the partially-connected and the fully-connected hybrid precoding structures.
Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure
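The factorization problem can be sketched numerically. The following is an illustrative alternating scheme with a unit-modulus projection in the spirit of the approach described, not the authors' actual algorithm; the dimensions, step size and iteration count are assumptions:

```python
# Approximate F_opt by F_RF @ F_BB, where the partially-connected F_RF is
# block-diagonal (each RF chain drives one sub-array) with unit-modulus entries.
import numpy as np

rng = np.random.default_rng(0)
Nt, Nrf, Ns = 16, 4, 2                  # antennas, RF chains, streams (assumed)
F_opt = rng.standard_normal((Nt, Ns)) + 1j * rng.standard_normal((Nt, Ns))

# Partially-connected mask: antenna i is wired to exactly one RF chain.
mask = np.kron(np.eye(Nrf), np.ones((Nt // Nrf, 1)))
F_rf = mask * np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Nrf)))

for _ in range(50):
    # Baseband update: unconstrained least squares given the RF precoder.
    F_bb = np.linalg.lstsq(F_rf, F_opt, rcond=None)[0]
    # RF update: gradient step, then project onto the unit-modulus mask
    # (keep only the phases of the permitted entries).
    grad = (F_rf @ F_bb - F_opt) @ F_bb.conj().T
    F_rf = mask * np.exp(1j * np.angle(F_rf - 0.01 * grad))

print("residual:", np.linalg.norm(F_opt - F_rf @ F_bb))
```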
Procedia PDF Downloads 321
877 Sustainable Crop Mechanization among Small Scale Rural Farmers in Nigeria: The Hurdles
Authors: Charles Iledun Oyewole
Abstract:
The daunting challenges that the 'man with the hoe' is going to face in the coming decades will be complex and interwoven. With the global population already above 7 billion people, it has been estimated that food (crop) production must more than double by 2050 to meet the world's food requirements. Nigeria's population is also expected to reach over 240 million people by 2050, at the current annual population growth rate of 2.61 per cent. The country's farming population is estimated at over 65 per cent, yet the country still depends on food importation to complement production. The small-scale farmer, who depends on simple hand tools such as hoes and cutlasses, remains the centre of agricultural production, accounting for 90 per cent of the total agricultural output and 80 per cent of the market flow. While the hoe may have been a tool for sustainable development at one time in human history, this role has been smothered by population growth, which has brought too many mouths to feed (over 170 million) as well as many industries to fuel with raw materials. It may then be argued that the hoe is unfortunately not a tool for the coming challenges and that agricultural mechanization should be the focus. However, agriculture as an enterprise is a 'complete wheel' which does not work when broken, particularly with respect to mechanization. Generally, mechanization will prompt increased production where land is readily available; increased production will require post-harvest handling mechanisms, crop processing and subsequent storage. An important aspect of this is readily available and favourable markets for such produce, fuelled by good agricultural policies. A break in this wheel will lead to the process of mechanization crashing back to subsistence production, and probably a reversal to the hoe. The focus of any agricultural policy should be to chart a course for sustainable mechanization that is environmentally friendly and that may ameliorate Nigeria's food and raw material gaps. This is the focal point of this article.
Keywords: crop production, farmer, hoes, mechanization, policy framework, population growth, rural areas
Procedia PDF Downloads 222
876 Experimenting the Influence of Input Modality on Involvement Load Hypothesis
Authors: Mohammad Hassanzadeh
Abstract:
As far as incidental vocabulary learning is concerned, the basic contention of the Involvement Load Hypothesis (ILH) is that retention of unfamiliar words is, generally, conditional upon the degree of involvement in processing them. This study examined input modality and incidental vocabulary uptake in a task-induced setting whereby three variously loaded task types (marginal glosses, fill-in task, and sentence writing) were alternately assigned to one group of students at Allameh Tabataba'i University (n=21) during six classroom sessions. While one round of exposure comprised the audiovisual medium (TV talk shows), the second round consisted of textual materials with approximately similar subject matter (reading texts). In both conditions, however, the tasks were equivalent to one another. Taken together, the study pursued two objectives: first, to establish a litmus test for the ILH and its proposed values of 'need', 'search' and 'evaluation'; second, to shed light on the question of the superiority of exposure to audiovisual input over written input as far as the incorporation of tasks is concerned. At the end of each treatment session, a vocabulary active recall test was administered to measure the learners' incidental gains. A one-way analysis of variance revealed that the audiovisual intervention yielded higher gains than the written version even when differing tasks were included. Meanwhile, task three (sentence writing) turned out to be the most efficient in tapping learners' active recall of the target vocabulary items. In addition to shedding light on the superiority of audiovisual input over written input when circumstances are held relatively constant, this study, for the most part, supported the underlying tenets of the ILH.
Keywords: evaluation, incidental vocabulary learning, input mode, Involvement Load Hypothesis, need, search
Procedia PDF Downloads 279
875 An Analysis of LoRa Networks for Rainforest Monitoring
Authors: Rafael Castilho Carvalho, Edjair de Souza Mota
Abstract:
As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon Rainforest has the greatest biodiversity on the planet, harboring about 15% of all the world's flora. Recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropic ones, which irreversibly affect this biome. Functional and low-cost monitoring alternatives to reduce these impacts are a priority, such as those based on Low Power Wide Area Networks (LPWAN). Promising, reliable, secure and with low energy consumption, LPWAN can connect thousands of IoT devices, and LoRa in particular is considered one of the most successful solutions for forest monitoring applications. Despite this, the forest environment, in particular the Amazon Rainforest, is challenging for these technologies, requiring work to identify and validate their use in a real environment. To investigate the feasibility of deploying LPWAN for remote water quality monitoring of rivers in the Amazon region, a LoRa-based test bed consisting of a LoRa transmitter and a LoRa receiver was set up, both implemented with Arduino and the SX1276 LoRa chip. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to the gateway at the university. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern, since in the real application the device must run without maintenance for long periods of time. With these constraints in mind, parameters such as the spreading factor (SF) and coding rate (CR), different antenna heights, and distances were tuned to improve the connectivity quality, measured by RSSI and loss rate. A handheld RF Explorer spectrum analyzer was used to obtain the RSSI values. Distances exceeding 200 m soon proved difficult for establishing communication due to the dense foliage and high humidity. The optimal SF-CR combinations were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17% respectively, with a signal strength of approximately -120 dBm, these being the best settings for this study so far. Rain and climate changes imposed limitations on the equipment, and more tests are already being conducted. Subsequently, the range of the LoRa configuration must be extended using a mesh topology, especially because at least three different collection points in the same water body are required.
Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest
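Because battery life is the stated constraint, the airtime cost of the chosen SF/CR settings matters. A back-of-the-envelope sketch using the standard SX1276 time-on-air formula (the bandwidth, payload size and preamble length below are assumptions, not the paper's values):

```python
# LoRa time-on-air per the SX1276 datasheet formula; cr_denom=5..8 means
# coding rates 4/5..4/8, matching the "SF-CR 8-5 / 9-5" notation above.
from math import ceil

def time_on_air(sf, cr_denom, payload=20, bw=125e3, preamble=8,
                crc=1, implicit_header=0, low_dr_opt=0):
    """Return packet airtime in seconds."""
    t_sym = (2 ** sf) / bw
    cr = cr_denom - 4
    num = 8 * payload - 4 * sf + 28 + 16 * crc - 20 * implicit_header
    den = 4 * (sf - 2 * low_dr_opt)
    n_payload = 8 + max(ceil(num / den) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + n_payload * t_sym

for sf in (8, 9):
    print(f"SF{sf}, CR 4/5: {time_on_air(sf, 5) * 1000:.1f} ms")
```

Each SF step roughly doubles the symbol time, so SF9 buys link margin in dense foliage at the price of nearly double the transmit energy per packet.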
Procedia PDF Downloads 89
874 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots
Authors: Mrinalini Ranjan, Sudheesh Chethil
Abstract:
Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behavior. We use two methods, 1) Detrended Fluctuation Analysis (DFA) and 2) Recurrence Plots (RP), to capture this complex behavior of EEG signals. DFA considers fluctuations around local linear trends, and the scale invariance of these signals is well captured in the multifractal characterisation it provides. Analysis of long-range correlations is vital for understanding the dynamics of EEG signals; correlation properties in the EEG signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in the epileptic EEG signals, quantifying short-range and long-range correlations. To illustrate this, we perform DFA on extant ictal (seizure) and interictal (seizure-free) datasets of different patients in different channels. We compute the short-term and long-term scaling exponents and report a decrease in the short-range scaling exponent during seizure compared to the pre-seizure period, with a subsequent increase during the post-seizure period, while the long-term scaling exponent shows an increase during seizure activity. Our calculation of the long-term scaling exponent yields a value between 0.5 and 1, thus pointing to power-law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour: an increase in the long-term scaling exponent during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution across channels can help in better identification of the areas of the brain most affected during seizure activity. The nature of epileptic seizures varies from patient to patient. To illustrate this, we report an increase in the long-term scaling exponent for some patients, which is also complemented by the recurrence plots (RP). An RP is a graph that shows the time indices at which a dynamical state recurs. We perform recurrence quantification analysis (RQA) and calculate RQA parameters such as diagonal length, entropy, recurrence rate, and determinism for the ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. We observe that RQA parameters are higher during the seizure period than post-seizure for most patients, whereas for some patients the post-seizure values exceeded those during seizure. We attribute this to the varying nature of seizures in different patients, indicating a different route or mechanism during the transition. Our results can help in better understanding and characterisation of epileptic EEG signals from a nonlinear-analysis perspective.
Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots
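The DFA procedure itself is standard and can be sketched in a few lines (textbook algorithm, not the authors' code; the white-noise test signal and scale range are assumptions). White noise should yield a scaling exponent near 0.5, which serves as a sanity check:

```python
# Order-1 DFA: integrate the signal, detrend it in windows of size n,
# compute the fluctuation F(n), and fit alpha from log F(n) vs log n.
import numpy as np

def dfa(signal, scales):
    """Return the fluctuation F(n) for each window size n."""
    y = np.cumsum(signal - np.mean(signal))   # integrated profile
    F = []
    for n in scales:
        rms = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            rms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                 # white noise: alpha ~ 0.5
scales = np.unique(np.logspace(2, 9, 12, base=2).astype(int))
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print(f"scaling exponent alpha = {alpha:.2f}")
```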
Procedia PDF Downloads 176
873 The Multiple Sclerosis Condition and the Role of Varicella-Zoster Virus in Its Progression
Authors: Sina Mahdavi, Mahdi Asghari Ozma
Abstract:
Multiple sclerosis (MS) is the most common inflammatory autoimmune disease of the central nervous system (CNS), affecting the myelination process. Complex interactions of various environmental or infectious factors may act as triggers in autoimmunity and disease progression. The association between viral infections, especially varicella-zoster virus (VZV), and MS is one potential cause that is not well understood. This study aims to summarize the available data on VZV infection in MS disease progression. For this study, the keywords "multiple sclerosis", "varicella-zoster virus", and "central nervous system" were searched in the databases PubMed, Google Scholar, SID, and MagIran between 2016 and 2022, and 14 articles were chosen, studied, and analyzed. Analysis of the amino acid sequences of HNRNPA1 against VZV proteins has shown a 62% amino acid sequence similarity between VZV gE and the PrLD/M9 epitope region (TNPO1 binding domain) of mutant HNRNPA1. The heterogeneous nuclear ribonucleoprotein (hnRNP) encoded by HNRNPA1 is involved in the processing and transfer of mRNA and pre-mRNA. Mutant HNRNPA1 mimics the gE of VZV as an antigen, which leads to autoantibody production. Mutant HNRNPA1 translocates to the cytoplasm and, after aggregation, is presented by MHC class I and targeted by CD8+ cells. Antibodies and immune cells directed against the gE epitopes of VZV persist due to the memory immune response, causing neurodegeneration and the development of MS in genetically predisposed individuals. VZV expression during the course of MS in genetically predisposed individuals carrying an HNRNPA1 mutation suggests a link between VZV and MS, and that this virus may play a role in the development of MS by inducing an inflammatory state. Therefore, measures to modulate VZV expression may be effective in reducing inflammatory processes in the demyelinated areas of genetically predisposed MS patients. Keywords: multiple sclerosis, varicella-zoster virus, central nervous system, autoimmunity
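The reported 62% figure is the kind of value produced by a pairwise sequence comparison. As a hedged illustration of the underlying computation only (not the authors' actual pipeline or alignment tool), a percent-identity calculation over two pre-aligned peptide fragments can be sketched as:

```python
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percent identity between two pre-aligned, equal-length sequences.

    Gap positions ('-') count toward the alignment length, so the result
    depends on the alignment supplied; real studies produce that alignment
    first with tools such as BLAST or Clustal.
    """
    if len(aligned_a) != len(aligned_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b and a != "-" for a, b in zip(aligned_a, aligned_b))
    return 100.0 * matches / len(aligned_a)

# Toy fragments (hypothetical, not the real gE / HNRNPA1 epitope sequences)
print(percent_identity("GQSSNFGYG-A", "GQSGNFGYGSA"))
```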
Procedia PDF Downloads 76
872 Optimization of MAG Welding Process Parameters Using Taguchi Design Method on Dead Mild Steel
Authors: Tadele Tesfaw, Ajit Pal Singh, Abebaw Mekonnen Gezahegn
Abstract:
Welding is a basic manufacturing process for making components or assemblies. Recent welding economics research has focused on developing reliable machinery databases to ensure optimum production. Research on the welding of materials like steel is still critical and ongoing. Welding input parameters play a very significant role in determining the quality of a weld joint. The metal active gas (MAG) welding parameters are among the most important factors affecting the quality, productivity, and cost of welding in many industrial operations. The aim of this study is to optimize the process parameters of metal active gas welding for a 60x60x5 mm dead mild steel plate workpiece, using the Taguchi method to formulate the statistical experimental design on a semi-automatic welding machine. An experimental study was conducted at Bishoftu Automotive Industry, Bishoftu, Ethiopia. This study examines the influence of four welding parameters (control factors), namely welding voltage (V), welding current (A), wire speed (m/min), and CO2 gas flow rate (l/min), each at three levels, on the variability of weld hardness. The objective function was chosen in relation to the MAG welding outcome, i.e., the welding hardness of the final product. Nine experimental runs based on an L9 orthogonal array of the Taguchi method were performed. The orthogonal array, signal-to-noise (S/N) ratio, and analysis of variance (ANOVA) were employed to investigate the welding characteristics of the dead mild steel plate and to obtain the optimum level for every input parameter at a 95% confidence level. The optimal parameter setting was found to be a welding voltage of 22 V, a welding current of 125 A, a wire speed of 2.15 m/min, and a gas flow rate of 19 l/min, within the constraints of the production process. Finally, six confirmation welds were carried out; the agreement of the predicted values with the experimental values confirms the effectiveness of the analysis of welding hardness (quality) in the final products. It is found that welding current has the major influence on the quality of welded joints. The experimental result for the optimum setting gave a better weld hardness than the initial setting. This study is valuable for welding plates of different materials and thicknesses in Ethiopian industries. Keywords: weld quality, metal active gas welding, dead mild steel plate, orthogonal array, analysis of variance, Taguchi method
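As a sketch of the S/N analysis step, the following assumes a larger-is-better quality characteristic, as is common for hardness; the L9(3^4) array is the standard one, but the hardness replicates are random placeholders, not the study's measurements.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels (0, 1, 2)
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def sn_larger_is_better(y):
    """Taguchi S/N ratio (dB) for a larger-is-better response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical hardness replicates (3 per run) for the nine runs
hardness = np.random.uniform(150, 220, size=(9, 3))
sn = np.array([sn_larger_is_better(run) for run in hardness])

# Mean S/N per factor level -> the level with the highest mean is optimal
factors = ["voltage", "current", "wire speed", "gas flow"]
for f, name in enumerate(factors):
    level_means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(f"{name}: best level = {int(np.argmax(level_means))}")
```

Picking the best level per factor this way is what yields a combined optimum (e.g., 22 V, 125 A, 2.15 m/min, 19 l/min) that may not coincide with any single run of the array, which is why confirmation welds are needed.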
Procedia PDF Downloads 481
871 Research and Application of Multi-Scale Three-Dimensional Plant Modeling
Authors: Weiliang Wen, Xinyu Guo, Ying Zhang, Jianjun Du, Boxiang Xiao
Abstract:
Reconstructing and analyzing three-dimensional (3D) models from in situ measured data is important for a number of research areas and applications in plant science, including plant phenotyping, functional-structural plant modeling (FSPM), plant germplasm resource protection, and agricultural technology popularization. Plant modeling spans many scales, from microscopic to macroscopic: cell, tissue, organ, plant, and canopy. The techniques currently used for data capture, feature analysis, and 3D reconstruction differ considerably across these scales. In this context, morphological data acquisition and the 3D analysis and modeling of plants at different scales are introduced systematically, along with the data capture equipment commonly used at each scale. Key issues and difficulties at the different scales are then described. Some examples are also given, such as micron-scale phenotypic quantification and 3D microstructure reconstruction of vascular bundles within maize stalks based on micro-CT scanning, 3D reconstruction of leaf surfaces and feature extraction from point clouds acquired with a 3D handheld scanner, and plant modeling by combining parameter-driven 3D organ templates. Several application examples using the 3D models and analysis results of plants are also introduced. A 3D maize canopy was constructed, and light distribution was simulated within the canopy, which was used for the design of ideal plant types. A grape tree model was constructed from 3D digitizing and point cloud data, which was used to produce science content for the 11th International Conference on Grapevine Breeding and Genetics. Using the tissue models of plants, Google Glass was used to look around visually inside the plant to understand its internal structure. With the development of information technology, 3D data acquisition and data processing techniques will play a greater role in plant science. Keywords: plant, three dimensional modeling, multi-scale, plant phenotyping, three dimensional data acquisition
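As a small illustration of the kind of point-cloud feature extraction mentioned for leaf surfaces, here is a minimal sketch assuming the leaf is locally near-planar; real pipelines add registration, denoising, and surface fitting, and the synthetic cloud below is a placeholder, not scanner output.

```python
import numpy as np

def leaf_plane_features(points):
    """Fit a least-squares plane to a leaf point cloud via PCA/SVD.

    Returns the centroid, the unit surface normal (direction of least
    variance), and an RMS flatness residual usable as a shape feature.
    """
    centroid = points.mean(axis=0)
    _, s, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    rms_residual = s[-1] / np.sqrt(len(points))
    return centroid, normal, rms_residual

# Hypothetical scanner output: N x 3 array of (x, y, z) samples on one leaf
cloud = np.random.rand(1000, 3) * [50.0, 30.0, 2.0]   # a thin, leaf-like slab
c, n, r = leaf_plane_features(cloud)
print("normal:", np.round(n, 3), "flatness RMS:", round(float(r), 3))
```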
Procedia PDF Downloads 277
870 Monitoring Deforestation Using Remote Sensing and GIS
Authors: Tejaswi Agarwal, Amritansh Agarwal
Abstract:
Forest ecosystems play a very important role in the global carbon cycle, storing about 80% of all above-ground and 40% of all below-ground terrestrial organic carbon. There is much interest in the extent of tropical forests and their rates of deforestation for two reasons: their greenhouse gas contributions and the profoundly negative impact on biodiversity. Deforestation has many ecological, social, and economic consequences, one of which is the loss of biological diversity. The rapid deployment of remote sensing (RS) satellites and the development of RS analysis techniques in the past three decades have provided a reliable, effective, and practical way to characterize terrestrial ecosystem properties. Global estimates of tropical deforestation vary widely, ranging from 50,000 to 170,000 km2/yr; recent FAO tropical deforestation estimates for 1990–1995 cite 116,756 km2/yr globally. Remote sensing can prove to be a very useful tool in monitoring forests and the associated deforestation to a sufficient level of accuracy without the need to physically survey the forest areas, many of which are physically inaccessible. The methodology for the assessment of forest cover using digital image processing (ERDAS) has been followed. The satellite data for the study were procured from the Indian Institute of Remote Sensing (IIRS), Dehradun, in digital format. While procuring the satellite data, care was taken to ensure that the data were cloud-free and did not belong to the dry, leafless season. The Normalized Difference Vegetation Index (NDVI) has been used as a numerical indicator of the reduction in ground biomass: NDVI = (NIR - Red) / (NIR + Red). After calculating the NDVI variations and the associated mean, we analysed the change in ground biomass. Through this paper, we have tried to indicate the rate of deforestation over a given period of time by comparing the forest cover at different time intervals. With the help of remote sensing and GIS techniques, it is clearly shown that the total forest cover is continuously degrading and transforming into various land use/land cover categories. Keywords: remote sensing, deforestation, supervised classification, NDVI, change detection
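The per-pixel NDVI computation and a simple change detection between two acquisition dates can be sketched as follows; the band arrays and the change threshold are illustrative placeholders, not the study's data.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-10)   # epsilon guards zero-sum pixels

# Hypothetical co-registered NIR/Red band arrays for two dates
nir_t1, red_t1 = np.random.rand(512, 512), np.random.rand(512, 512)
nir_t2, red_t2 = np.random.rand(512, 512), np.random.rand(512, 512)

delta = ndvi(nir_t1, red_t1) - ndvi(nir_t2, red_t2)
deforested = delta > 0.2                       # illustrative change threshold
print("flagged pixels:", int(deforested.sum()),
      "mean NDVI drop:", float(delta.mean()))
```

Comparing the mean NDVI (and the flagged-pixel count) across dates is what turns the index into a deforestation-rate indicator over the chosen period.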
Procedia PDF Downloads 1204
869 3D-Printing of Waveguide Terminations: Effect of Material Shape and Structuring on Their Characteristics
Authors: Lana Damaj, Vincent Laur, Azar Maalouf, Alexis Chevalier
Abstract:
Matched terminations are an important class of passive waveguide components. A termination is typically used at the end of a waveguide transmission line to prevent reflections and improve signal quality. Waveguide terminations (loads) are commonly used in microwave and RF applications. In traditional microwave architectures, a waveguide termination usually consists of a standard rectangular waveguide filled with a lossy resistive material and ended by a metallic shorting plate. These types of terminations are used to dissipate the energy as heat. However, they may increase the size and weight of the overall system. A new alternative solution consists of developing terminations based on the 3D-printing of materials. Designing such terminations is very challenging, since they must meet the requirements imposed by the system. These requirements include many parameters, such as absorption and power handling capability, in addition to the cost, size, and weight that have to be minimized. 3D-printing is a shaping process that enables the production of complex geometries, making it possible to find the best compromise between requirements. In this paper, a comparison study has been made between different existing and new shapes of waveguide terminations. Indeed, 3D-printing of absorbers makes it possible to study not only standard shapes (wedge, pyramid, tongue) but also more complex topologies such as exponential ones. These shapes have been designed and simulated using CST MWS®. The loads have been printed using carbon-filled polylactic acid (conductive PLA) from ProtoPasta. Since the terminations have been characterized in the X-band (from 8 GHz to 12 GHz), the rectangular waveguide standard WR-90 has been selected. The classical wedge shape has been used as a reference. First, all loads have been simulated with the same length, and two parameters have been compared: the absorption level (level of |S11|) and the dissipated power density. This study shows that the concave exponential pyramidal shape has the best absorption level and the convex exponential pyramidal shape has the best dissipated power density level. These two loads have been printed in order to measure their properties, and good agreement between the simulated and measured reflection coefficients has been obtained. Furthermore, material structuring based on a hexagonal honeycomb structure has been investigated in order to vary the effective properties. In the final paper, the detailed methodology and the simulated and measured results will be presented in order to show how 3D-printing makes it possible to control mass, weight, absorption level, and power behaviour. Keywords: additive manufacturing, electromagnetic composite materials, microwave measurements, passive components, power handling capacity (PHC), 3D-printing
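For a one-port load there is no transmitted wave, so the absorbed fraction of incident power follows directly from the reflection coefficient: P_abs = 1 - |S11|^2. A minimal sketch of this conversion (the -20 dB example value is illustrative, not a measured result from the paper):

```python
import math

def absorbed_fraction(s11_db):
    """Fraction of incident power dissipated by a one-port termination.

    A matched load transmits nothing, so anything not reflected is
    absorbed: P_abs = 1 - |S11|^2, with |S11| taken from the dB value.
    """
    gamma = 10 ** (s11_db / 20)   # reflection coefficient magnitude
    return 1 - gamma ** 2

# e.g. a termination with -20 dB return loss absorbs 99% of incident power
print(f"{absorbed_fraction(-20):.2%}")
```

This is why a signal level near -120 dBm matters in link tests but |S11| alone suffices to rank termination shapes: the lower the reflection, the closer the load is to ideal absorption.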
Procedia PDF Downloads 21