Search results for: alternating magnetic field
6308 Genetics, Law and Society: Regulating New Genetic Technologies
Authors: Aisling De Paor
Abstract:
Scientific and technological developments are driving genetics and genetic technologies into the public sphere. Scientists are making genetic discoveries as to the makeup of the human body and the cause and effect of disease, diversity and disability amongst individuals. Technological innovation in the field of genetics is also advancing, with the development of genetic testing and other emerging genetic technologies, including gene editing (which offers the potential for genetic modification). In addition to the benefits for medicine, health care and humanity, these genetic advances raise a range of ethical, legal and societal concerns. From an ethical perspective, such advances may, for example, change the concept of humans and what it means to be human. Science may take over in conceptualising human beings, which may push the boundaries of existing human rights. New genetic technologies, particularly gene editing techniques, create the potential to stigmatise disability by highlighting disability or genetic difference as something that should be eliminated or anticipated. From a disability perspective, the use (and misuse) of genetic technologies raises concerns about discrimination and violations of the dignity and integrity of the individual. With an acknowledgement of the likely future orientation of genetic science, and in consideration of the intersection of genetics and disability, this paper highlights the main concerns raised as genetic science and technology advance (particularly with gene editing developments), and the consequences for disability and human rights. Through the use of traditional doctrinal legal methodologies, it investigates the use (and potential misuse) of gene editing as creating the potential for a unique form of discrimination and stigmatisation to develop, as well as a potential gateway to a form of new, subtle eugenics.
This article highlights the need to maintain caution as to the use, application and the consequences of genetic technologies. With a focus on the law and policy position in Europe, it examines the need to control and regulate these new technologies, particularly gene editing. In addition to considering the need for regulation, this paper highlights non-normative approaches to address this area, including awareness raising and education, public discussion and engagement with key stakeholders in the field and the development of a multifaceted genetics advisory network.
Keywords: disability, gene-editing, genetics, law, regulation
Procedia PDF Downloads 360
6307 Vascular Crossed Aphasia in Dextrals: A Study on Bengali-Speaking Population in Eastern India
Authors: Durjoy Lahiri, Vishal Madhukar Sawale, Ashwani Bhat, Souvik Dubey, Gautam Das, Biman Kanti Roy, Suparna Chatterjee, Goutam Gangopadhyay
Abstract:
Crossed aphasia has been an area of considerable interest for cognitive researchers, as it offers a fascinating insight into cerebral lateralization of language function. We conducted an observational study on subjects with crossed aphasia in the stroke unit of a tertiary care neurology teaching hospital in eastern India over a period of four years. During the study period, we detected twelve cases of crossed aphasia in strongly right-handed patients, caused by ischemic stroke. The age, gender, vernacular language and educational status of the patients were noted. Aphasia type and severity were assessed using the validated Bengali version of the Western Aphasia Battery. Computed tomography, magnetic resonance imaging and angiography were used to evaluate the location and extent of the ischemic lesion in the brain. Our series of 12 cases of crossed aphasia included 7 males and 5 females, with a mean age of 58.6 years. Eight patients were found to have Broca’s aphasia, 3 had trans-cortical motor aphasia and 1 patient suffered from global aphasia. Nine patients had very severe aphasia and 3 suffered from mild aphasia. The mirror-image type of crossed aphasia was found in 3 patients, whereas 9 had the anomalous variety. In our study, crossed aphasia was found to be more frequent in males. The anomalous pattern was more common than the mirror-image one. The majority of the patients had motor-type aphasia, and no patient was found to have a pure comprehension deficit. We hypothesize that in the Bengali-speaking right-handed population, the lexical-semantic system of the language network remains loyal to the left hemisphere even if the phonological output system is anomalously located in the right hemisphere.
Keywords: aphasia, crossed, lateralization, language function, vascular
Procedia PDF Downloads 192
6306 The State of Urban Neighbourhood Research
Authors: Gideon Baffoe
Abstract:
The concept of neighbourhood remains highly relevant in urban studies. However, until now, no attempt has been made to statistically chart the field. This study aims to provide a macroscopic overview using bibliometric analysis of the main characteristics of neighbourhood research in order to understand the academic landscape. The study analyses the emergence and evolution of the concept of neighbourhood in published research, conceptual and intellectual structures as well as scholarship collaboration. It is found that topics related to the local economy of neighbourhoods are sparse, suggesting a major gap in the literature.
Keywords: neighbourhood, global south, bibliometric analysis, scholarship
Procedia PDF Downloads 136
6305 Micro-Droplet Formation in a Microchannel under the Effect of an Electric Field: Experiment
Authors: Sercan Altundemir, Pinar Eribol, A. Kerem Uguz
Abstract:
Microfluidic systems allow many large-scale laboratory applications to be miniaturized on a single device in order to reduce cost and advance fluid control. Moreover, such systems make it possible to generate and control droplets, which play a significant role in improved analysis for many chemical and biological applications. For example, they can be employed as a model for cells in microfluidic systems. In this work, the interfacial instability of two immiscible Newtonian liquids flowing in a microchannel is investigated. When two immiscible liquids are in the laminar regime, a flat interface is formed between them. If a direct current electric field is applied, the interface may deform, i.e. become unstable, rupture, and form micro-droplets. First, the effect of the thickness ratio, total flow rate and viscosity ratio of the silicone oil and ethylene glycol liquid couple on the critical voltage at which the interface starts to destabilize is investigated. Then the droplet sizes are measured under the effect of these parameters at various voltages. Moreover, the effect of the total flow rate on the time elapsed for the interface to be ruptured to form droplets by hitting the wall of the channel is analyzed. It is observed that an increase in the viscosity or thickness ratio of the silicone oil to the ethylene glycol has a stabilizing effect, i.e. a higher voltage is needed, while the total flow rate has no effect on it. However, an increase in the total flow rate shortens the time elapsed for the interface to hit the wall. Moreover, the droplet size decreases down to 0.1 μL with an increase in the applied voltage, the viscosity ratio or the total flow rate, or with a decrease in the thickness ratio.
In addition to these observations, three empirical models are established: one for the critical electric number (i.e., the dimensionless voltage), one for the droplet size, and a third, combining the first two, for determining the droplet size at the critical voltage.
Keywords: droplet formation, electrohydrodynamics, microfluidics, two-phase flow
Procedia PDF Downloads 176
6304 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods
Authors: Matthew D. Baffa
Abstract:
Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies such as locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures and emissivity values for various components of the exterior above-grade wall assemblies were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology from the literature using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from materials and dimensions detailed in architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. 
Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by between 2% and 33%. This study suggests infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair degree of accuracy.
Keywords: emissivity, heat loss, infrared thermography, thermal conductance
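The heat balance described above (conduction through the wall equated to the convective plus radiative losses from the exterior surface) can be sketched in a few lines. This is an illustration only: the convective coefficient `h_c` and all input temperatures below are assumed values for demonstration, not measurements or methodology details from the study.

```python
# Sketch of an in-situ U-value estimate from outdoor IR thermography:
# conductive heat flux through the wall is equated to the sum of the
# convective and radiative losses from the exterior surface.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def u_value(t_surface_c, t_reflected_c, t_out_c, t_in_c, emissivity, h_c=15.0):
    """Estimate the overall U-value (W/(m^2 K)) from IR field measurements."""
    ts = t_surface_c + 273.15    # exterior wall surface temperature, K
    tr = t_reflected_c + 273.15  # reflected (apparent background) temperature, K
    q_rad = emissivity * SIGMA * (ts**4 - tr**4)  # radiative loss, W/m^2
    q_conv = h_c * (ts - (t_out_c + 273.15))      # convective loss, W/m^2
    return (q_rad + q_conv) / (t_in_c - t_out_c)  # conductance per unit area

# Example with assumed winter-night readings: wall slightly warmer than air
print(round(u_value(t_surface_c=-8.0, t_reflected_c=-12.0,
                    t_out_c=-10.0, t_in_c=21.0, emissivity=0.90), 3))
```

In practice the surface, reflected and air temperatures come from the thermal imager and weather records, and the sensitivity of the result to the assumed `h_c` is one reason the estimates in the study scatter around the nominal value.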
Procedia PDF Downloads 313
6303 European Standardization in Nanotechnologies and Relation with International Work: The Standardization Can Help Industry and Regulators in Developing Safe Products
Authors: Patrice Conner
Abstract:
Nanotechnologies have enormous potential to contribute to human flourishing in responsible and sustainable ways. They are a rapidly developing field of science, technology and innovation. As enabling technologies, their full scope of applications is potentially very wide. Major implications are expected in many areas, e.g. healthcare, information and communication technologies, energy production and storage, materials science/chemical engineering, manufacturing, environmental protection, consumer products, etc. However, nanotechnologies are unlikely to realize their full potential unless their associated societal and ethical issues are adequately attended to. Namely, nanotechnologies and nanoparticles may expose humans and the environment to new health risks, possibly involving quite different mechanisms of interference with the physiology of human and environmental species. One of the building blocks of the ‘safe, integrated and responsible’ approach is standardization. Both the Economic and Social Committee and the European Parliament have highlighted the importance to be attached to standardization as a means to accompany the introduction of nanotechnologies and nanomaterials on the market, and as a means to facilitate the implementation of regulation. ISO and CEN started in 2005 and 2006, respectively, to deal with selected topics related to this emerging and enabling technology. At the beginning of 2010, EC DG ‘Enterprise and Industry’ addressed the mandate M/461 to CEN, CENELEC and ETSI for standardization activities regarding nanotechnologies and nanomaterials. Thus CEN/TC 352 ‘Nanotechnologies’ has been asked to take the leadership in coordinating the execution of M/461 (46 topics to be standardized) and to contact relevant European and international technical committees and interested stakeholders as appropriate (56 structures have been identified).
Prior requests from M/461 deal with the characterization and exposure of nanomaterials and any matters related to Health, Safety and Environment. Answers will be given to the following questions: What are the structures and how do they work? Where are we right now, and how will work proceed from now onwards? How do CEN’s work and targets deal with and interact with global matters in this field?
Keywords: characterization, environmental protection, exposure, health risks, nanotechnologies, responsible and sustainable ways, safety
Procedia PDF Downloads 188
6302 Synthesis and Optimization of Bio Metal-Organic Framework with Permanent Porosity
Authors: Tia Kristian Tajnšek, Matjaž Mazaj, Nataša Zabukovec Logar
Abstract:
Metal-organic frameworks (MOFs), with their specific properties and the possibility of tuning the structure, represent excellent candidates for use in the biomedical field. Their advantage lies in large pore surfaces and volumes, as well as the possibility of using bio-friendly or bioactive constituents. So-called bioMOFs are representatives of MOFs which are constructed from at least one biomolecule (metal, a small bioactive molecule in metal clusters and/or linker) and are intended for bio-application (usually in the field of medicine, most commonly drug delivery). When designing a bioMOF for biomedical applications, we should adhere to some guidelines for an improved toxicological profile of the material, such as (i) choosing an endogenous/nontoxic metal, (ii) a GRAS (generally recognized as safe) linker, and (iii) nontoxic solvents. The design and synthesis of bioNICS-1 (bioMOF of the National Institute of Chemistry Slovenia – 1) follow all these guidelines. Zinc (Zn) was chosen as an endogenous metal with an agreeable recommended daily intake (RDI) and LD50 value, and ascorbic acid (vitamin C) was chosen as a GRAS and active linker. With these building blocks, we synthesized the bioNICS-1 material. The synthesis was done in ethanol using a solvothermal method. The synthesis protocol was further optimized in three separate ways: optimization of (i) the synthesis parameters to improve the yield of the synthesis, (ii) the input reactant ratio and the addition of specific modulators for the production of larger crystals, and (iii) the heating source (conventional, microwave and ultrasound) to produce nano-crystals. With these optimization strategies, the synthesis yield was increased. Larger crystals were prepared for structural analysis with the use of the proper species and amount of modulator. The synthesis protocol was adjusted to the different heating sources, resulting in the production of nano-crystals of bioNICS-1 material.
BioNICS-1 was further activated in ethanol and structurally characterized, resolving the crystal structure of the new material.
Keywords: ascorbic acid, bioMOF, MOF, optimization, synthesis, zinc ascorbate
Procedia PDF Downloads 141
6301 A Readiness Framework for Digital Innovation in Education: The Context of Academics and Policymakers in Higher Institutions of Learning to Assess the Preparedness of Their Institutions to Adopt and Incorporate Digital Innovation
Authors: Lufungula Osembe
Abstract:
The field of education has witnessed advances in technology and digital transformation. The methods of teaching have undergone significant changes in recent years, with effects on areas such as pedagogies, curriculum design, personalized teaching, gamification, data analytics, cloud-based learning applications, artificial intelligence tools, advanced plug-ins in LMSs, and the emergence of multimedia creation and design. The field of education has not been immune to the changes brought about by digital innovation in recent years, similar to other fields such as engineering, health, science, and technology. There is a need to look at the variables/elements that digital innovation brings to education and to develop a framework for higher institutions of learning to assess their readiness to create a viable environment for digital innovation to be successfully adopted. Given the potential benefits of digital innovation in education, it is essential to develop a framework that can assist academics and policymakers in higher institutions of learning in evaluating the effectiveness of adopting and adapting to the evolving landscape of digital innovation in education. The primary research question addressed in this study is to establish the preparedness of higher institutions of learning to adopt and adapt to this evolving landscape. This study follows the Design Science Research (DSR) paradigm to develop a framework for academics and policymakers in higher institutions of learning to evaluate the readiness of their institutions to adopt digital innovation in education, proceeding through the DSR stages of problem awareness, suggestion, development, evaluation, and conclusion.
One of the major contributions of this study will be the development of the framework for digital innovation in education. Given the various opportunities offered by digital innovation in recent years, a readiness framework for digital innovation will play a crucial role in guiding academics and policymakers in their quest to align with the emerging technologies facilitated by digital innovation in education.
Keywords: digital innovation, DSR, education, opportunities, research
Procedia PDF Downloads 69
6300 Prediction Model of Body Mass Index of Young Adult Students of Public Health Faculty of University of Indonesia
Authors: Yuwaratu Syafira, Wahyu K. Y. Putra, Kusharisupeni Djokosujono
Abstract:
Background/Objective: Body Mass Index (BMI) serves various purposes, including measuring the prevalence of obesity in a population and formulating a patient’s diet at a hospital, and can be calculated with the equation BMI = body weight (kg) / body height (m)². However, the BMI of an individual who has difficulty carrying their weight or standing up straight cannot necessarily be measured. The aim of this study was to form a prediction model for the BMI of young adult students of the Public Health Faculty of the University of Indonesia. Subject/Method: This study used a cross-sectional design, with a total sample of 132 respondents, consisting of 58 males and 74 females aged 21-30. The dependent variable of this study was BMI, and the independent variables consisted of sex and anthropometric measurements, which included ulna length, arm length, tibia length, knee height, mid-upper arm circumference, and calf circumference. Anthropometric information was measured and recorded in a single sitting. Simple and multiple linear regression analyses were used to create the prediction equation for BMI. Results: The male respondents had an average BMI of 24.63 kg/m² and the female respondents an average of 22.52 kg/m². A total of 17 variables were analysed for their correlation with BMI. Bivariate analysis showed that the variable with the strongest correlation with BMI was Mid-Upper Arm Circumference/√Ulna Length (MUAC/√UL) (r = 0.926 for males and r = 0.886 for females). Furthermore, MUAC alone also has a very strong correlation with BMI (r = 0.913 for males and r = 0.877 for females). Prediction models formed from either MUAC/√UL or MUAC alone both produce highly accurate predictions of BMI. However, measuring MUAC/√UL is considered inconvenient, which may cause difficulties when applied in the field.
Conclusion: The prediction model considered most ideal to estimate BMI is: Male BMI (kg/m²) = 1.109(MUAC (cm)) – 9.202 and Female BMI (kg/m²) = 0.236 + 0.825(MUAC (cm)), based on its high accuracy level and the convenience of measuring MUAC in the field.
Keywords: body mass index, mid-upper arm circumference, prediction model, ulna length
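The concluding equations can be wrapped in a small helper. The regression coefficients below are taken directly from the abstract; the example MUAC values are illustrative only.

```python
# The final BMI prediction models quoted in the conclusion, as a function.

def predict_bmi(muac_cm: float, sex: str) -> float:
    """Predict BMI (kg/m^2) from mid-upper arm circumference (cm)."""
    if sex == "male":
        return 1.109 * muac_cm - 9.202      # Male BMI = 1.109*MUAC - 9.202
    if sex == "female":
        return 0.236 + 0.825 * muac_cm      # Female BMI = 0.236 + 0.825*MUAC
    raise ValueError("sex must be 'male' or 'female'")

print(round(predict_bmi(30.0, "male"), 2))    # → 24.07
print(round(predict_bmi(28.0, "female"), 2))  # → 23.34
```

Note the models apply only within the studied population (young adults aged 21-30); extrapolating beyond the MUAC range observed in the sample would not be justified.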
Procedia PDF Downloads 214
6299 Light-Entropy Continuum Theory
Authors: Christopher Restall
Abstract:
field causing attraction between mixed charges of matter during charge exchanges with antimatter. This asymmetry is caused by non-trinary quark amount variation in matter and antimatter during entropy progression. This document explains how a circularity critique exercise assessed scientific knowledge and developed a unified theory from the information collected. The circularity critique creates greater intuition leaps than an individual would make naturally; the information collected can be integrated and assessed thoroughly for correctness.
Keywords: unified theory of everything, gravity, quantum gravity, standard model
Procedia PDF Downloads 42
6298 Specification and Unification of All Fundamental Forces Exist in Universe in the Theoretical Perspective – The Universal Mechanics
Authors: Surendra Mund
Abstract:
At the beginning, the physical entity force was defined mathematically by Sir Isaac Newton in his Principia Mathematica as F = dp/dt, in the form of his second law of motion. Newton also defined his universal law of gravitational force in the same outstanding book, but at the end of the 20th century and the beginning of the 21st century, we have tried hard to specify and unify the four or five fundamental forces or interactions existing in the universe, and have failed every time. Usually, gravity creates problems in this unification every single time, but in my previous papers and presentations, I defined and derived field and force equations for gravitation-like interactions for each and every kind of central system. This force is named Variational Force by me, and it is generated by variation in the scalar field density around the body. In this particular paper, I first specify which types of interactions are fundamental in the universal sense (or in all types of central systems or bodies predicted by my N-time Inflationary Model of the Universe) and then unify them in the universal framework (defined and derived by me as Universal Mechanics in a separate paper) as well. This will also be valid in the universal dynamical sense, which includes inflations and deflations of the universe, central system relativity, universal relativity, ϕ-ψ transformation and transformation of spin, the physical perception principle, the Generalized Fundamental Dynamical Law and many other important generalized principles of Generalized Quantum Mechanics (GQM) and Central System Theory (CST).
So, in this article, I first generalize some fundamental principles, then unify Variational Forces (the general form of gravitation-like interactions) and Flow Generated Forces (the general form of EM-like interactions), and then unify all fundamental forces by specifying weak and strong interactions in terms of more basic interactions: variational, flow generated and transformational.
Keywords: Central System Force, Disturbance Force, Flow Generated Forces, Generalized Nuclear Force, Generalized Weak Interactions, Generalized EM-Like Interactions, Imbalance Force, Spin Generated Forces, Transformation Generated Force, Unified Force, Universal Mechanics, Uniform And Non-Uniform Variational Interactions, Variational Interactions
Procedia PDF Downloads 50
6297 Investigating Software Engineering Challenges in Game Development
Authors: Fawad Zaidi
Abstract:
This paper discusses a variety of challenges and solutions involved in creating computer games and the issues faced by the software engineers working in this field. The review further investigates the articles’ coverage of project scope and the problem of feature creep that appears to be inherent in game development. The paper tries to answer the following question: Is this a problem caused by a shortage of, or bad, software engineering practices, or is it outside the control of the software engineering component of the game production process?
Keywords: software engineering, computer games, software applications, development
Procedia PDF Downloads 475
6296 Engineers’ Ability to Lead Effectively the Transformation to Sustainable Manufacturing: A Case Study of Saudi Arabia
Authors: Mohammed Alharbi, Clare Wood, Vasileios Samaras
Abstract:
Sustainability leadership is a controversial topic, particularly in the engineering context. The theoretical and practical technical focus of the engineering profession impacts our lives. Technologically, engineers contribute significantly to our modern civilization. The industrial revolutions are among the top engineering accomplishments that have contributed to the flourishing of our lives. However, engineers have not always received the credit they deserve; instead, they have been blamed for the advent of various global issues, among them the global warming phenomenon, which is believed to be a result of the industrial revolutions. Global challenges demand that engineers demonstrate more than their technical skills for effective contribution to a sustainable future. As a result, engineering leadership has emerged as a new research field. Sustainable manufacturing is a cornerstone of sustainable development. Investigating the change to more sustainable manufacturing practices is a significant issue for all, and even more so in the field of engineering leadership. Engineers dominate the manufacturing industry; however, one of the main criticisms of engineers is a lack of leadership skills. The literature on engineering leadership has not sufficiently highlighted engineers’ ability to lead sustainable manufacturing. Since we are on the cusp of a new industrial revolution, Industry 4.0, it is vital to investigate the ability of engineers to lead the industry towards a sustainable future. The primary purpose of this paper is to evaluate engineers’ sustainability leadership competencies utilizing the Cambridge University Behavioral Competency Model. However, the practical application of the Cambridge model is limited due to the absence of a reliable measurement tool. Therefore, as a secondary objective, this study developed a valid and reliable survey instrument compatible with the Cambridge model.
More than 300 Saudi engineers from the manufacturing industry responded to an online questionnaire administered through the Qualtrics platform; the responses were analyzed using SPSS software. The findings provide a contemporary understanding of engineers’ mindset in relation to sustainability leadership. The output of this research study could be valuable in designing effective engineering leadership programs in academia or industry, particularly for enhancing a sustainable manufacturing environment.
Keywords: engineer, leadership, manufacturing, sustainability
Procedia PDF Downloads 158
6295 Embolism: How Changes in Xylem Sap Surface Tension Affect the Resistance against Hydraulic Failure
Authors: Adriano Losso, Birgit Dämon, Stefan Mayr
Abstract:
In vascular plants, water flows from roots to leaves in a metastable state, and even a small perturbation of the system can lead to a sudden transition from the liquid to the vapor phase, resulting in xylem embolism (cavitation). Xylem embolism, induced by drought stress and/or freezing stress, is caused by the aspiration of gaseous bubbles into xylem conduits from adjacent gas-filled compartments through pit membrane pores (‘air seeding’). At water potentials less negative than the threshold for air seeding, the surface tension (γ) stabilizes the air-water interface and thus prevents air from passing the pit pores. This probably also holds true for conifers, where this effect occurs at the edge of the sealed torus. Accordingly, it was experimentally demonstrated that γ influences air seeding, but information on the relevance of this effect under field conditions is missing. In this study, we analyzed seasonal changes in γ of the xylem sap in two conifers growing at the alpine timberline (Picea abies and Pinus mugo). In addition, cut branches were perfused (40 min perfusion at 0.004 MPa) with solutions of different γ (i.e. distilled and degassed water, and 2, 5 and 15% (v/v) ethanol-water solutions, corresponding to γ of 74, 65, 55 and 45 mN m⁻¹, respectively) and their vulnerability to drought-induced embolism analyzed via the centrifuge technique (Cavitron). In both species, xylem sap γ changed considerably (ca. 53-67 and ca. 50-68 mN m⁻¹ in P. abies and P. mugo, respectively) over the season. Branches perfused with low γ solutions showed reduced resistance against drought-induced embolism in both species. A significant linear relationship (P < 0.001) between P12, P50 and P88 (i.e. the water potential at 12, 50 and 88% loss of conductivity) and xylem sap γ was found. Based on this correlation, a variation in P50 between -3.10 and -3.83 MPa (P. abies) and between -3.21 and -4.11 MPa (P. mugo) over the season could be estimated.
Results demonstrate that changes in γ of the xylem sap can considerably influence a tree’s resistance to drought-induced embolism. They indicate that vulnerability analyses, normally conducted at a γ near that of pure water, might often underestimate vulnerabilities under field conditions. For the studied timberline conifers, seasonal changes in γ might be especially relevant in winter, when frost drought and freezing stress can lead to excessive embolism.
Keywords: conifers, Picea abies, Pinus mugo, timberline
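The reported linear γ-P50 relation invites a back-of-envelope interpolation. The sketch below anchors a straight line on the seasonal extremes reported for Picea abies; the pairing of endpoints (lower γ giving a less negative, i.e. more vulnerable, P50) follows the abstract's finding that low-γ branches showed reduced resistance, but the resulting coefficients are an assumption for illustration, not the fitted regression from the study.

```python
# Illustrative linear mapping from xylem sap surface tension (mN/m) to the
# estimated P50 (MPa), anchored on the seasonal extremes reported for P. abies:
# gamma ~53-67 mN/m corresponding to P50 between about -3.10 and -3.83 MPa.

def p50_from_gamma(gamma, g_lo=53.0, g_hi=67.0, p_lo=-3.10, p_hi=-3.83):
    """Linearly interpolate P50 (MPa) from surface tension gamma (mN/m)."""
    frac = (gamma - g_lo) / (g_hi - g_lo)  # position within the gamma range
    return p_lo + frac * (p_hi - p_lo)     # lower gamma -> less negative P50

print(round(p50_from_gamma(60.0), 3))  # midpoint of the reported seasonal range
```

The same construction could be repeated with the P. mugo extremes (-3.21 to -4.11 MPa over γ of about 50-68 mN/m), with the same caveat that the true regression slope and intercept are in the paper, not here.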
Procedia PDF Downloads 294
6294 An Experimental Investigation on Explosive Phase Change of Liquefied Propane During a BLEVE Event
Authors: Frederic Heymes, Michael Albrecht Birk, Roland Eyssette
Abstract:
Boiling Liquid Expanding Vapor Explosion (BLEVE) has been a well-known industrial accident for over six decades now, and yet it is still poorly predicted and avoided. A BLEVE is created when a vessel containing a pressure liquefied gas (PLG) is engulfed in a fire until the tank ruptures. At this time, the pressure drops suddenly, leaving the liquid in a superheated state. The vapor expansion and the violent boiling of the liquid produce several shock waves. This work aimed at understanding the contribution of the vapor and liquid phases to the overpressure generation in the near field. Experimental work was undertaken at a small scale to reproduce realistic BLEVE explosions. Key parameters were controlled through the experiments, such as failure pressure, fluid mass in the vessel, and weakened length of the vessel. Thirty-four propane BLEVEs were then performed to collect data on scenarios similar to common industrial cases. The aerial overpressure was recorded all around the vessel, as were the internal pressure change during the explosion and the ground loading under the vessel. Several high-speed cameras were used to capture the vessel explosion and the blast creation by shadowgraph. Results highlight how the pressure field is anisotropic around the cylindrical vessel and show a strong dependency between vapor content and the maximum overpressure from the lead shock. The time chronology of events reveals that the vapor phase is the main contributor to the aerial overpressure peak. A prediction model is built upon this assumption. Secondary flow patterns are observed after the lead shock. A theory on how the second shock observed in the experiments forms is proposed, thanks to an analogy with numerical simulation. The phase change dynamics are also discussed thanks to a window in the vessel.
Ground loading measurements are finally presented and discussed to give insight into the order of magnitude of the force.
Keywords: phase change, superheated state, explosion, vapor expansion, blast, shock wave, pressure liquefied gas
Procedia PDF Downloads 77
6293 Computation of Radiotherapy Treatment Plans Based on CT to ED Conversion Curves
Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić
Abstract:
Radiotherapy treatment planning computers use CT data of the patient. For the computation of a treatment plan, the treatment planning system must have information on the electron densities of the tissues scanned by CT. This information is given by the conversion curve from CT number to electron density (ED), or simply the calibration curve. Every treatment planning system (TPS) has built-in default CT to ED conversion curves for CTs from different manufacturers. However, it is always recommended to verify the CT to ED conversion curve before actual clinical use. The objective of this study was to check how well the default curve provided matches the curve actually measured on a specific CT, and how much this influences the calculation of the treatment planning computer. The examined CT scanners were from the same manufacturer, but comprised four different scanners from three generations. The measurements of all calibration curves were done with the dedicated phantom CIRS 062M Electron Density Phantom. The phantom was scanned, and according to the real HU values read at the CT console computer, CT to ED conversion curves were generated for different materials, all at the same tube voltage of 140 kV. Another phantom, the CIRS Thorax 002 LFC, which represents an average human torso in proportion, density and two-dimensional structure, was used for verification. Treatment planning was done on CT slices of the scanned CIRS LFC 002 phantom for selected cases. Points of interest were set in the lungs and in the spinal cord, and doses recorded in the TPS. The overall calculated treatment times for the four scanners and the default data did not differ by more than 0.8%. The overall interest-point dose in bone differed by at most 0.6%, while for single fields the maximum was 2.7% (lateral field). The overall interest-point dose in the lungs differed by at most 1.1%, while for single fields the maximum was 2.6% (lateral field).
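In practice, a calibration curve of this kind is a piecewise-linear lookup from CT number to relative electron density. A minimal sketch, with illustrative calibration points (placeholders, not the measured CIRS phantom data):

```python
import numpy as np

# Illustrative (HU, relative electron density) calibration pairs;
# a real TPS curve would come from the measured phantom inserts.
hu = np.array([-1000.0, -700.0, 0.0, 300.0, 1200.0])
ed = np.array([0.0, 0.3, 1.0, 1.17, 1.7])

def hu_to_ed(hu_value):
    """Linearly interpolate relative electron density from a CT number."""
    return float(np.interp(hu_value, hu, ed))
```

Water (HU = 0) maps to a relative electron density of 1.0, and intermediate CT numbers are interpolated between the nearest measured points.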
It is known that the user should verify the CT to ED conversion curve, but developing countries often face a lack of QA equipment and use the default data provided. We conclude that the CT to ED curves obtained differ at certain points of the curve, generally in the region of higher densities. The influence on the treatment planning result is not significant, but it definitely makes a difference in the calculated dose.
Keywords: computation of treatment plan, conversion curve, radiotherapy, electron density
Procedia PDF Downloads 486
6292 Connotation Reform and Problem Response of Rural Social Relations under the Influence of the Earthquake: With a Review of the Wenchuan Decade
Abstract:
The Wenchuan earthquake of 2008 caused severe damage to the rural areas of Chengdu city, including the rupture of social networks, the stagnation of economic production and the rupture of living space, making post-disaster reconstruction a question of sustainability. As an important link maintaining the order of rural social development, the social network should be an important component of post-disaster reconstruction. This paper therefore takes rural reconstruction communities in the earthquake-stricken areas of Chengdu as its research object and adopts sociological research methods such as field survey, observation and interview to understand the transformation of rural social networks under the influence of the earthquake and their impact on rural space. It finds that rural societies affected by the earthquake generally experienced three phases: the break of stable social relations, a transitional, temporary non-normal state, and the reorganization of social networks. The connotation of rural social relations changed accordingly in each phase: toward a new division of labor in the social dimension, toward capital flow and redistribution under a new production mode in the capital dimension, and toward relative decentralization after concentration in the spatial dimension. Along with such changes, rural areas have seen social issues emerge, such as the alienation of competition in the new industrial division, low social connection, significant redistribution of capital, and the lack of public space. Based on a comprehensive review of these issues, this paper proposes a corresponding response mechanism. First of all, a reasonable division of labor should be established within the villages to realize diversified commodity supply. Secondly, the villages should adjust their industrial types to promote the equitable participation of capital allocation groups.
Finally, external public spaces should be added to strengthen the field of social interaction within the communities.
Keywords: social relations, social support networks, industrial division, capital allocation, public space
Procedia PDF Downloads 156
6291 Numerical Study of Homogeneous Nanodroplet Growth
Authors: S. B. Q. Tran
Abstract:
Drop condensation is the phenomenon in which tiny drops form when oversaturated vapour present in the environment condenses on a substrate and the droplets grow. Recently, this subject has received much attention due to its applications in many fields such as thin film growth, heat transfer, recovery of atmospheric water and polymer templating. In the literature, many papers have investigated macroscopic droplet growth, at the millimeter scale of radius, theoretically and experimentally. However, few papers on nanodroplet condensation are found in the literature, especially theoretical work. In order to understand droplet growth at the nanoscale, we perform numerical simulations of nanodroplet growth. We investigate and discuss the roles of droplet shape and monomer diffusion in drop growth and their effect on the growth law. The effect of droplet shape is studied through parametric studies of contact angle and disjoining pressure magnitude. The effect of pinning and de-pinning behaviour is also studied. We investigate the axisymmetric homogeneous growth of a single 10–100 nm water nanodroplet on a substrate surface. The main mechanism of droplet growth is the accumulation of laterally diffusing water monomers, formed by the absorption of water vapour from the environment onto the substrate. Under the assumption of quasi-steady thermodynamic equilibrium, the nanodroplet evolves according to the augmented Young–Laplace equation. Using continuum theory, we model the dynamics of nanodroplet growth, including the coupled effects of disjoining pressure, contact angle and monomer diffusion, with the assumption of a constant flux of water monomers at the far field. The simulation results are validated by comparison with published experimental results. For the case of nanodroplet growth with constant contact angle, our numerical results show that the initial droplet growth is transient and governed by monomer diffusion.
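For reference, the augmented Young–Laplace relation invoked here can be written schematically in its small-slope thin-film form (the notation below is a common convention assumed for illustration, not reproduced from the paper):

```latex
% Pressure jump across the droplet surface: capillary term plus
% disjoining pressure \Pi(h) acting on the film profile h(r,t).
\Delta p \;=\; -\,\gamma\,\nabla^{2} h \;-\; \Pi(h)
```

Here h is the local film or droplet height, γ the surface tension and Π(h) the disjoining pressure; the 1/3 and 1/4 power laws quoted in this abstract describe the late-time growth of radius and height that this balance, coupled to monomer diffusion, produces.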
When the flux at the far field is small, the droplet first grows by the diffusion of the initially available water monomers on the substrate and thereafter by the flux at the far field. In the late steady stage, the growth of both the droplet radius and the droplet height follows a power law of 1/3, which is unaffected by the substrate disjoining pressure and contact angle. However, it is found that the droplet grows faster in the radial direction than in height when disjoining pressure and contact angle increase. The simulation also quantifies the effect of the computational domain in the transient growth period: when the computational domain is larger, more mass enters through the free substrate domain, so more mass enters the droplet, and the droplet grows and reaches the steady state faster. For the case of pinning and de-pinning droplet growth, the simulation shows that the disjoining pressure does not affect the 1/3 growth law of the droplet radius in the steady state. However, the disjoining pressure modifies the growth of the droplet height, which then follows a power law of 1/4. We demonstrate how spatial depletion of monomers can lead to a growth arrest of the nanodroplet, as observed experimentally.
Keywords: augmented Young–Laplace equation, contact angle, disjoining pressure, nanodroplet growth
Procedia PDF Downloads 273
6290 Variation in N₂ Fixation and N Contribution by 30 Groundnut (Arachis hypogaea L.) Varieties Grown in Blesbokfontein, Mpumalanga Province, South Africa
Authors: Titus Y. Ngmenzuma, Cherian Mathews, Felix D. Dakora
Abstract:
In Africa, poor nutrient availability, particularly of N and P, coupled with low soil moisture due to erratic rainfall, constitutes the major crop production constraint. Although inorganic fertilizers are an option for meeting crop nutrient requirements for increased grain yield, their high cost and scarcity make them inaccessible to resource-poor farmers in Africa. Because crops grown on such nutrient-poor soils are micronutrient deficient, incorporating N₂-fixing legumes into cropping systems can sustainably improve crop yield and nutrient accumulation in the grain. In Africa, groundnut easily forms an effective symbiosis with native soil rhizobia, leading to a marked N contribution in cropping systems. In this study, field experiments were conducted at Blesbokfontein in Mpumalanga Province to assess N₂ fixation and N contribution by 30 groundnut varieties during the 2018/2019 planting season, using the ¹⁵N natural abundance technique. The results revealed marked differences in shoot dry matter yield, symbiotic N contribution, soil N uptake and grain yield among the groundnut varieties. The percent N derived from fixation ranged from 37 to 44% for varieties ICGV131051 and ICGV13984, the amount of N-fixed from 21 to 58 kg/ha for varieties Chinese and IS-07273, soil N uptake from 31 to 80 kg/ha for varieties IS-07947 and IS-07273, and grain yield from 193 to 393 kg/ha for varieties ICGV15033 and ICGV131096, respectively. Compared to earlier studies on groundnut in South Africa, this study has shown low N₂ fixation and N contribution to the cropping systems, possibly due to environmental factors such as low soil moisture.
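The bookkeeping behind these ¹⁵N natural abundance figures can be sketched as follows (the function names and sample values are illustrative; B is the δ¹⁵N of groundnut grown without soil N):

```python
def percent_ndfa(delta15n_ref, delta15n_legume, b_value):
    """Percent N derived from atmospheric fixation (15N natural
    abundance method): %Ndfa = 100 * (d15N_ref - d15N_leg) / (d15N_ref - B)."""
    return 100.0 * (delta15n_ref - delta15n_legume) / (delta15n_ref - b_value)

def n_fixed_kg_ha(shoot_n_kg_ha, ndfa_percent):
    """Amount of N fixed (kg/ha), given total shoot N and %Ndfa."""
    return shoot_n_kg_ha * ndfa_percent / 100.0
```

For example, a reference-plant δ¹⁵N of 5‰, a legume δ¹⁵N of 2‰ and a B value of -1‰ would give %Ndfa = 50%.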
Because the groundnut varieties differed in their growth, symbiotic performance and grain yield, more field testing is required over a range of differing agro-ecologies to identify genotypes suitable for different cropping environments.
Keywords: ¹⁵N natural abundance, percent N derived from fixation, amount of N-fixed, grain yield
Procedia PDF Downloads 188
6289 Development of Pothole Management Method Using Automated Equipment with Multi-Beam Sensor
Authors: Sungho Kim, Jaechoul Shin, Yujin Baek, Nakseok Kim, Kyungnam Kim, Shinhaeng Jo
Abstract:
Climate change and the increase in heavy traffic have been accelerating damage, such as potholes, on asphalt pavement. Potholes cause traffic accidents, vehicle damage, road casualties and traffic congestion. A quick and efficient maintenance method is needed because potholes are caused by stripping and accelerate pavement distress. In this study, we propose rapid and systematic pothole management through the development of automated pothole-repair equipment that includes a system for measuring pothole volume. Three kinds of cold-mix asphalt mixture were investigated as candidate repair materials. The materials were evaluated for compliance with quality standards and applicability to the automated equipment. The volume measurement system consists of a multi-sensor unit, combining a laser sensor and an ultrasonic sensor, installed at the front and side of the automated repair equipment. An algorithm was proposed to calculate the amount of repair material according to the measured pothole volume, and a system for releasing the correct amount of material was developed. Field test results showed that the loss of repair material could be reduced from approximately 20% to 6% per pothole. Rapid automated pothole-repair equipment will contribute to improved quality and to efficient, economical maintenance, not only by reducing materials and resources but also by metering the appropriate amount of material. Through field application, it is possible to improve the accuracy of pothole volume measurement, to refine the calculation of material amount, and to manage pothole data for roads, thereby enabling more efficient pavement maintenance management. Acknowledgment: The authors would like to thank the MOLIT (Ministry of Land, Infrastructure, and Transport). This work was carried out through a project funded by the MOLIT.
The project name is 'development of 20mm grade for road surface detecting roadway condition and rapid detection automation system for removal of pothole'.
Keywords: automated equipment, management, multi-beam sensor, pothole
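The material-metering step described in this abstract, converting a measured pothole volume into a mass of mix to release, can be sketched as follows (the density and the loss allowance are assumed, illustrative values, not figures from the study):

```python
def repair_material_mass_kg(volume_m3, density_kg_m3=2350.0, loss_factor=0.06):
    """Mass of cold-mix asphalt to dispense for a measured pothole volume.
    The mix density and the 6% loss allowance are illustrative assumptions."""
    return volume_m3 * density_kg_m3 * (1.0 + loss_factor)
```

A 0.01 m³ pothole would then call for roughly 25 kg of mix under these assumptions.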
Procedia PDF Downloads 224
6288 99mTc Scintimammography in an Equivocal Breast Lesion
Authors: Malak Shawky Matter Elyas
Abstract:
Introduction: Early detection of breast cancer is the main tool to decrease morbidity and mortality rates. Many diagnostic tools are used, such as mammography, ultrasound and magnetic resonance imaging, but none of them is conclusive, especially for very small lesions of less than 1 cm, so more accurate tools are needed. Patients and methods: This study involved 13 patients with different breast lesions. Six patients had breast cancer, one of whom had metastatic axillary lymph nodes without a clinically or mammographically detected breast mass, proved by biopsy and histopathology. Of the other 7 patients, 4 had benign breast lesions proved by biopsy and histopathology, and 3 showed equivocal breast lesions on mammography. A volume of 370-444 MBq of 99mTc-bombesin was injected. Dynamic 1-min images were acquired with a gamma camera for 20 minutes immediately after injection in the anterior view. Thereafter, two static images in the anterior and prone lateral views were acquired for 5 minutes each. Finally, single-photon emission computed tomography images were taken for each patient. The definitive diagnosis was based on biopsy and histopathology. Results: All 6 patients with biopsy-proven breast cancer showed positive findings on scintimammography (Sestamibi). One of the 4 patients with biopsy-proven benign breast lesions showed positive findings on scintimammography, while the other 3 showed negative findings. All 3 patients with equivocal findings on mammography showed positive findings on scintimammography, proved by biopsy and histopathology.
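From the counts reported above, the diagnostic performance on this small series can be tallied directly (the abstract itself does not state these derived figures, so this is a reader's computation):

```python
# Counts from the reported series (biopsy/histopathology as ground truth)
tp, fn = 6, 0   # cancers: all six positive on scintimammography
tn, fp = 3, 1   # benign lesions: three negative, one false positive

sensitivity = tp / (tp + fn)   # 6/6
specificity = tn / (tn + fp)   # 3/4
```

On these 10 histopathologically confirmed lesions, sensitivity is 100% and specificity 75%, though such a small sample gives wide confidence intervals.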
Conclusions: While we agree that scintimammography will not replace mammography as a mass screening tool, we believe that many patients will benefit from it, especially women with dense breast tissue and women with breast implants, who are difficult to assess by mammography because its sensitivity is low in these settings, as well as women with metastatic axillary lymph nodes without clinical or mammographic findings. Scintimammography can also be used in sentinel lymph node mapping as a more accurate, non-invasive tool.
Keywords: breast, radiodiagnosis, lifestyle, surgery
Procedia PDF Downloads 32
6287 Graphene Transistors Based Microwave Amplifiers
Authors: Pejman Hosseinioun, Ali Safari, Hamed Sarbazi
Abstract:
Graphene is a one-atom-thick sheet of carbon with numerous impressive properties. It is a promising material for future high-speed nanoelectronics due to its intrinsically superior carrier mobility and very high saturation velocity. These exceptional carrier transport properties suggest that graphene field effect transistors (G-FETs) can potentially outperform other FET technologies. In this paper, graphene-transistor-based microwave amplifiers are discussed in detail.
Keywords: graphene, microwave FETs, microwave amplifiers, transistors
Procedia PDF Downloads 493
6286 Drug-Based Nanoparticles: Comparative Study of the Effect Drug Type on Release Kinetics and Cell Viability
Authors: Chukwudalu C. Nwazojie, Wole W. Soboyejo, John Obayemi, Ali Salifu Azeko, Sandra M. Jusu, Chinyerem M. Onyekanne
Abstract:
The conventional methods for the diagnosis and treatment of breast cancer include bulk systematic mammography, ultrasound, dynamic contrast-enhanced fast 3D gradient-echo (GRE) magnetic resonance imaging (MRI), surgery, chemotherapy, and radiotherapy. However, nanoparticles and drug-loaded polymer microspheres for disease (cancer) targeting and treatment have enormous potential to enhance the approaches used today. The goal is to produce an implantable biomedical device for localized breast cancer drug delivery within Africa and the world. The main advantage of localized delivery is that it reduces the amount of drug needed to achieve a therapeutic effect. Blends of the biodegradable polymers poly(D,L-lactide-co-glycolide) (PLGA) and polycaprolactone (PCL) are used as the drug excipient. This work focuses on the development of PLGA-PCL-based injectable drug microspheres loaded with anticancer drugs (prodigiosin (PG), with paclitaxel (PTX) as control) as well as the conjugated forms of the drugs functionalized with LHRH (luteinizing hormone-releasing hormone) (PG-LHRH, with PTX-LHRH as control), using a single-emulsion solvent evaporation technique. The encapsulation was done in the presence of PLGA-PCL (as the polymer matrix) and poly(vinyl alcohol) (PVA) (as an emulsifier). A comparative study of the drugs' release kinetics and the degradation mechanisms of drug-loaded PLGA-PCL is presented; the implication of this study is the potential application of prodigiosin-loaded PLGA-PCL microparticles for the controlled delivery of cancer drugs and for treatment to prevent regrowth or locoregional recurrence following surgical resection of triple-negative breast tumors.
Keywords: cancer, polymers, drug kinetics, nanoparticles
Procedia PDF Downloads 100
6285 Ectoine: A Compatible Solute in Radio-Halophilic Stenotrophomonas sp. WMA-LM19 Strain to Prevent Ultraviolet-Induced Protein Damage
Authors: Wasim Sajjad, Manzoor Ahmad, Sundas Qadir, Muhammad Rafiq, Fariha Hasan, Richard Tehan, Kerry L. McPhail, Aamer Ali Shah
Abstract:
Aim: This study aims to investigate the possible radiation-protective role of a compatible solute in the tolerance of a radio-halophilic bacterium to stresses such as desiccation and exposure to ionizing radiation. Methods and Results: Nine different radiation-resistant bacteria were isolated from desert soil, and strain WMA-LM19 was chosen for detailed study on the basis of its high tolerance to ultraviolet radiation among the isolates. 16S rRNA gene sequencing indicated that the bacterium was closely related to Stenotrophomonas sp. (KT008383). A bacterial milking strategy was applied to extract intracellular compatible solutes in 70% (v/v) ethanol, which were purified by high-performance liquid chromatography (HPLC). The compound was characterized as ectoine by ¹H and ¹³C nuclear magnetic resonance (NMR) and mass spectrometry (MS). Ectoine demonstrated more efficient protective activity (54.80%) toward erythrocyte membranes and also inhibited oxidative damage to proteins and lipids in comparison to the standard, ascorbic acid. Furthermore, a high level of ectoine-mediated protection of bovine serum albumin against ionizing radiation (1500-2000 J m⁻²) was observed, as indicated by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) analysis. Conclusion: The results indicate that ectoine can be used as a potential mitigator and radio-protective agent to overcome radiation- and salinity-mediated oxidative damage in extreme environments. Significance and Impact of the Study: This study shows that ectoine from radio-halophiles is a potential ingredient for topical sunscreen creams. The investigation of ectoine as a UV protectant also changes the perspective that radiation resistance is due only to molecular adaptation.
Keywords: ectoine, anti-oxidant, Stenotrophomonas sp., ultraviolet radiation
Procedia PDF Downloads 209
6284 System Dietadhoc® - A Fusion of Human-Centred Design and Agile Development for the Explainability of AI Techniques Based on Nutritional and Clinical Data
Authors: Michelangelo Sofo, Giuseppe Labianca
Abstract:
In recent years, the scientific community's interest in the exploratory analysis of biomedical data has increased exponentially. In the field of nutritional biology, the curative process, based on the analysis of clinical data, is a very delicate operation because there are multiple solutions for the management of pathologies in the food sector (for example, intolerances and allergies, management of cholesterol metabolism, diabetic pathologies, arterial hypertension, and even obesity and breathing and sleep problems). In this research work, a system was created that is capable of evaluating various dietary regimes for specific patient pathologies. The system is founded on a mathematical-numerical model and has been tailored to the real working needs of an expert in human nutrition using human-centered design (ISO 9241-210); it therefore keeps pace with continuous scientific progress in the field and evolves through the experience of managed clinical cases (a machine learning process). DietAdhoc® is a decision support system for nutrition specialists treating patients of both sexes (from 18 years of age), developed with an agile methodology. Its task is to draw up the biomedical and clinical profile of the specific patient by applying two algorithmic optimization approaches to nutritional data, plus a symbolic solution obtained by transforming the relational database underlying the system into a deductive database. For all three solution approaches, particular emphasis has been given to the explainability of the suggested clinical decisions through flexible and customizable user interfaces.
Furthermore, the system has multiple software modules, based on time series and visual analytics techniques, that allow evaluation of the complete picture of the situation and the evolution of the diet assigned for specific pathologies.
Keywords: medical decision support, physiological data extraction, data driven diagnosis, human centered AI, symbiotic AI paradigm
Procedia PDF Downloads 24
6283 Distribution of Micro Silica Powder at a Ready Mixed Concrete
Authors: Kyong-Ku Yun, Dae-Ae Kim, Kyeo-Re Lee, Kyong Namkung, Seung-Yeon Han
Abstract:
Micro silica is collected as a by-product of silicon and ferrosilicon alloy production in electric arc furnaces using highly pure quartz, wood chips, coke and the like. It consists of about 85% silicon and has spherical particles with an average particle size of 150 μm. The bulk density of micro silica varies from 150 to 700 kg/m³, and the fineness ranges from 150,000 to 300,000 cm²/g. The amorphous structure and high silicon oxide content of micro silica induce an active reaction with the calcium hydroxide (Ca(OH)₂) generated by cement hydration over a large surface area (about 20 m²/g), forming calcium silicate hydrate (C-S-H). Micro silica tends to act as a filler because of its fine particles and spherical shape. These particles are not covered by water and fit well into the spaces between the relatively rough cement grains, which would otherwise not allow the concrete to fluidize freely. On the other hand, water demand increases, since micro silica particles tend to absorb water because of their large surface area. The overall effect of micro silica depends on the amount added, together with other parameters such as the water-to-(cement + micro silica) ratio and the availability of superplasticizer. This research studied cellular sprayed concrete, a method that involves the direct re-production of ready-mixed concrete into a high-performance concrete at the job site. It can reduce the cost of construction by adding cellular foam and micro silica into a ready-mixed concrete truck in the field. Micro silica, which is difficult to mix in the field due to its high fineness, can thus be added and dispersed in the concrete by increasing the fluidity of the ready-mixed concrete through the surface activity of the cellular foam. The increased air content converges to a certain level during spraying, and high-performance concrete is produced by the remixing of powders in the spraying process.
As no field mixing equipment is used, the cost of construction decreases, and construction can proceed after installing a special spray machine on a commercial pump car. The use of special equipment is therefore minimized, providing economic feasibility through the utilization of existing equipment. This study was carried out to establish a highly reliable method of confirming dispersion in high-performance cellular sprayed concrete. A mixture of 25 mm coarse aggregate and river sand was applied to the concrete. By applying silica fume and foam, silica fume dispersion was examined as a function of foam mixing; the mean and standard deviation were obtained, and the coefficient of variation was calculated to evaluate the dispersion. Comparison and analysis before and after spraying were conducted for the experimental variables of 21 L and 35 L of foam at 7% and 14% silica fume, respectively. A specimen was cast for each variable, and a five-day sample was taken from each specimen for EDS testing. The experimental materials, mix design, test methods and equipment for the evaluation of dispersion as a function of micro silica and foam are described.
Keywords: micro silica, distribution, ready mixed concrete, foam
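The dispersion statistic used in this abstract, the coefficient of variation, is simply the sample standard deviation normalized by the mean. A minimal sketch over a set of EDS readings (the example values are illustrative):

```python
import statistics

def coefficient_of_variation_pct(values):
    """CV (%) = sample standard deviation / mean * 100; lower values
    indicate more uniform silica fume dispersion across EDS readings."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0
```

A CV near zero means the silica fume content measured at different sampling points is nearly uniform, i.e. well dispersed.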
Procedia PDF Downloads 219
6282 NFC Kenaf Core Graphene Paper: In-situ Method Application
Authors: M. A. Izzati, R. Rosazley, A. W. Fareezal, M. Z. Shazana, I. Rushdan, M. Jani
Abstract:
An ultrasonic probe was used to produce nanofibrillated cellulose (NFC) from kenaf core. The NFC kenaf core and graphene were mixed using an in-situ method at a voltage of 5 V for 24 hours. The resulting NFC graphene paper was characterized by field emission scanning electron microscopy (FESEM), Fourier transform infrared (FTIR) spectroscopy and thermogravimetric analysis (TGA). The properties of the NFC kenaf core graphene paper are compared with those of pure NFC kenaf core paper.
Keywords: NFC, kenaf core, graphene, in-situ method
Procedia PDF Downloads 394
6281 Chiral Molecule Detection via Optical Rectification in Spin-Momentum Locking
Authors: Jessie Rapoza, Petr Moroshkin, Jimmy Xu
Abstract:
Chirality is omnipresent in nature, in life, and in the field of physics. One intriguing example is the homochirality that has remained one of the great secrets of life. Another is pairs of mirror-image molecules, enantiomers. They are identical in atomic composition and therefore indistinguishable in their scalar physical properties, yet they can be either therapeutic or toxic depending on their chirality. Recent studies suggest a potential link between abnormal levels of certain D-amino acids and some serious health impairments, including schizophrenia, amyotrophic lateral sclerosis, and potentially cancer. Although indistinguishable in its scalar properties, the chirality of a molecule reveals itself in interaction with surroundings of a certain chirality or, more generally, of broken mirror symmetry. In this work, we report on a system for chiral molecule detection in which the mirror symmetry is doubly broken, first by an asymmetrically structured, nanopatterned plasmonic surface and then by the incidence of circularly polarized light (CPL). In this system, the incident circularly polarized light induces a surface plasmon polariton (SPP) wave propagating along the asymmetric plasmonic surface. This SPP field is itself chiral, evanescently bound to a near-field zone on the surface (~10 nm thick), but with an amplitude greatly intensified (by up to 10⁴) over that of the incident light. It hence probes only the molecules on the surface rather than those in the volume. In coupling to molecules along its path on the surface, the chiral SPP wave favors one chirality over the other, allowing for chirality detection via the change in an optical rectification current measured at the edges of the sample. The asymmetrically structured surface converts the high-frequency electron plasmonic oscillations in the SPP wave into a net DC drift current that can be measured at the edge of the sample via the mechanism of optical rectification.
The measured results validate these design concepts and principles. The observed optical rectification current exhibits a clear differentiation between a pair of enantiomers. Experiments were performed by focusing 1064 nm CW laser light on the sample, a gold grating microchip submerged in an approximately 1.82 M solution of either L-arabinose or D-arabinose in water. The current output was then recorded under both right and left circularly polarized light. Measurements were taken at various angles of incidence to optimize the coupling between the spin momenta of the incident light and of the SPP, that is, spin-momentum locking. In order to suppress the background, the values of the photocurrent for right CPL are subtracted from those for left CPL. Comparison between the two arabinose enantiomers reveals a preferential signal response of one enantiomer to left CPL and of the other to right CPL. In sum, this work reports the first experimental evidence of the feasibility of chiral molecule detection via optical rectification in a metal meta-grating. This nanoscale-interfaced electrical detection technology is advantageous over other detection methods due to its size, cost, ease of use, and ability to integrate with read-out electronic circuits for data processing and interpretation.
Keywords: chirality, detection, molecule, spin
Procedia PDF Downloads 92
6280 Algorithm for Improved Tree Counting and Detection through Adaptive Machine Learning Approach with the Integration of Watershed Transformation and Local Maxima Analysis
Authors: Jigg Pelayo, Ricardo Villar
Abstract:
The Philippines has long been considered a valuable producer of high-value crops globally. The country's employment and economy have depended on agriculture, increasing the demand for efficient agricultural mechanisms. Remote sensing and geographic information technology have proven to provide effective applications for precision agriculture through image-processing techniques, given the development of aerial scanning technology in the country. Accurate information concerning the spatial correlation within the field is very important for precision farming, especially of high-value crops. The availability of height information and high-spatial-resolution images obtained from aerial scanning, together with the development of new image analysis methods, is having a significant influence on precision agriculture techniques and applications. In this study, an algorithm was developed and implemented to detect and count high-value crops simultaneously through adaptive scaling of a support vector machine (SVM) algorithm subjected to an object-oriented approach, combining watershed transformation and a local maxima filter to enhance tree counting and detection. The methodology is compared with cutting-edge template matching procedures to demonstrate its effectiveness on a demanding tree counting, recognition and delineation problem. Since common data and image processing techniques are utilized, the method can easily be implemented in production processes covering large agricultural areas. The algorithm was tested on high-value crops such as palm, mango and coconut in Misamis Oriental, Philippines, showing good performance, significantly above 90%, in particular for young adult and adult trees. The results can support crop inventories and database updating, allowing for the reduction of field work and manual interpretation tasks.
Keywords: high value crop, LiDAR, OBIA, precision agriculture
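A minimal sketch of the local-maxima step on a canopy height raster (NumPy/SciPy; the SVM classification and watershed stages of the full pipeline are not reproduced here, and the window size and height threshold are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def tree_top_candidates(chm, window=3, min_height=2.0):
    """Return (row, col) indices of local maxima in a canopy height
    model (CHM), used as seed points for tree counting/delineation."""
    local_max = ndimage.maximum_filter(chm, size=window)
    peaks = (chm == local_max) & (chm >= min_height)
    return np.argwhere(peaks)
```

In a full pipeline these seed points would feed a watershed segmentation of the inverted CHM to delineate individual crowns.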
Procedia PDF Downloads 402
6279 Effect of Toxic Metals Exposure on Rat Behavior and Brain Morphology: Arsenic, Manganese
Authors: Tamar Bikashvili, Tamar Lordkipanidze, Ilia Lazrishvili
Abstract:
Heavy metals remain a serious environmental problem due to their toxic effects. The effects of arsenic and manganese compounds on rat behavior and neuromorphology were studied. Wistar rats were assigned to four groups: rats in the control group were given regular water, while rats in the other groups drank water with a final manganese concentration of 10 mg/L (group A) or 20 mg/L (group B), or a final arsenic concentration of 68 mg/L (group C), for one month. Exploratory and anxiety behavior, as well as aggressive performance in the home cage, were assessed with the open field test, and learning and memory status was evaluated in a multi-branched maze. The open field test revealed a statistically significant increase in motor and orienting-searching activity in the experimental groups, expressed as an increased number of lines crossed, rearings, and hole reflexes. The results indicated suppression of fear in rats exposed to manganese, estimated by the frequency of entries into the central part of the open field. The experiments revealed that 30-day exposure to 10 mg/L manganese did not stimulate aggressive behavior in rats, whereas after exposure to the higher dose (20 mg/L), 37% of initially non-aggressive animals manifested aggressive behavior; furthermore, 25% of rats were extremely aggressive. These data support the hypothesis that excess manganese in the body is one of the immediate causes of enhanced interspecific predatory aggression and violent behavior in rats. It was also found that manganese intoxication produces severe, non-reversible learning disability and insignificant, reversible memory disturbances. Studies of rodents exposed to arsenic also revealed changes in the learning process. As is known, the distribution of metal ions differs across brain regions.
The principal manganese accumulation was observed in the hippocampus and the neocortex, while arsenic predominantly accumulated in the nucleus accumbens, striatum, and cortex. These brain regions play an important role in the regulation of emotional state and motor activity. Histopathological analyses of brain sections revealed two morphologically distinct altered neuronal phenotypes: (1) shrunken cells with indications of apoptosis, in which the nucleus and cytoplasm were very difficult to distinguish but the integrity of the neuronal cytoplasm was not disturbed; and (2) swollen cells with indications of necrosis, showing a pyknotic nucleus, plasma membrane disruption, and cytoplasmic vacuoles, and surrounded by activated gliocytes. It is worth mentioning that in the cortex the majority of damaged neurons were apoptotic, while in the subcortical nuclei neurons were mainly necrotic. Ultrastructural analyses demonstrated that all cell types in the cortex and the nucleus caudatus exhibited destroyed mitochondria, widened profiles of the neuronal vacuolar system, an increased number of lysosomes, and degeneration of axonal endings.
Keywords: arsenic, manganese, behavior, learning, neuron
Procedia PDF Downloads 359