Search results for: NSGA-II Constraints handling.
356 New Teaching Tools for a Modern Representation of Chemical Bond in the Course of Food Science
Authors: Nicola G. G. Cecca
Abstract:
In Italian IPSSEOAs, high schools that provide vocational education to students who will work in the field of enogastronomy and hotel management, the course of Food Science allows students to begin to see food as a mixture of substances that they will transform during their profession. These substances are characterized not only by a chemical composition but also by a molecular structure that makes them nutritionally active. However, the increasing number of new products proposed by the food industry, modern techniques of production and transformation, and the innovative preparations required by customers have made much of the information reported in the most widespread Food Science textbooks out of date or too thin for people who will work in the catering sector. Authors often offer information dating back to Bohr's atomic model and to the 'octet rule' proposed by G. N. Lewis to describe the chemical bond, without any reference to newer ideas such as the atomic orbital model and molecular orbital theory, which in the meantime have begun to age themselves. Furthermore, this antiquated information precludes an easy understanding of a wide range of properties of nutritive substances and of many reactions in which food constituents are involved. In this paper, our attention is directed to the use of GEOMAG™ to represent the dynamics with which the chemical bond is formed during the synthesis of molecules. GEOMAG™ is a toy, produced by the Swiss company Geomagword S.A., designed to stimulate the imagination and manual ability of children aged 6-10 years, consisting of metallic spheres and magnetic metal bars coated with coloured plastic. The simulation carried out with GEOMAG™ is based on the similarity between the Coulomb force and the magnetic attraction force, and in particular between the formulae with which they are calculated.
The electrostatic force (F, in newtons) that allows the formation of the chemical bond can be calculated by means of Fc = kc·q1·q2/d², where q1 and q2 are the charges of the particles [C], d is the distance between the particles [m], and kc is Coulomb's constant. It is striking that the attraction force (Fm) acting between the magnetic extremities of the GEOMAG™ elements used to simulate the chemical bond can be calculated in the same way, by the formula Fm = km·m1·m2/d², where m1 and m2 represent the strengths of the poles [A·m], d is the distance between the particles [m], and km = μ/4π, in which μ is the magnetic permeability of the medium [N·A⁻²]. The magnetic attraction can be tested by students by trying to keep the magnetic elements of GEOMAG™ apart by hand, or measured by means of an appropriate dynamometric system. Furthermore, by using a dynamometric system to measure the magnetic attraction between the GEOMAG™ elements, it is possible to draw a graph F = f(d) and verify that the curve obtained during the simulation is very similar to the one hypothesized, around the 1920s, by Linus Pauling to describe the formation of H2+ in accordance with molecular orbital theory.
Keywords: chemical bond, molecular orbital theory, magnetic attraction force, GEOMAG™
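The shared inverse-square form of the two formulae above can be sketched numerically; this is a minimal illustration in which the charge, pole-strength, and distance values are assumptions for demonstration, not measurements from the paper:

```python
import math

def coulomb_force(q1, q2, d, kc=8.9875e9):
    """Electrostatic force Fc = kc*q1*q2/d^2 (N); charges in C, d in m."""
    return kc * q1 * q2 / d ** 2

def magnetic_force(m1, m2, d, mu=4e-7 * math.pi):
    """Magnetic force Fm = km*m1*m2/d^2 with km = mu/(4*pi); poles in A*m."""
    km = mu / (4 * math.pi)
    return km * m1 * m2 / d ** 2

# Both laws share the same 1/d^2 dependence: halving the distance
# quadruples the attraction, which students can feel with the bars.
f_far = magnetic_force(10.0, 10.0, 0.02)
f_near = magnetic_force(10.0, 10.0, 0.01)
print(f_near / f_far)  # ~4.0
```

The identical 1/d² shape of both curves is what makes the magnetic F = f(d) plot a stand-in for the electrostatic one.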
Procedia PDF Downloads 267
355 Groundwater Potential Delineation Using Geodetector Based Convolutional Neural Network in the Gunabay Watershed of Ethiopia
Authors: Asnakew Mulualem Tegegne, Tarun Kumar Lohani, Abunu Atlabachew Eshete
Abstract:
Groundwater potential delineation is essential for efficient water resource utilization and long-term development. The scarcity of potable and irrigation water has become a critical issue due to natural and anthropogenic pressures on the demands of human survival and productivity. Under these constraints, groundwater resources are now being used extensively in Ethiopia. Therefore, an innovative convolutional neural network (CNN) is applied in the Gunabay watershed to delineate groundwater potential based on selected major influencing factors. Groundwater recharge, lithology, drainage density, lineament density, transmissivity, and geomorphology were selected as the major influencing factors for the groundwater potential of the study area. Of the total 128 samples, 70% were used for training and 30% for testing. The spatial distribution of groundwater potential was classified into five groups: very low (10.72%), low (25.67%), moderate (31.62%), high (19.93%), and very high (12.06%). The area receives high rainfall but has a very low amount of recharge due to a lack of proper soil and water conservation structures. The major outcome of the study is that moderate and low potential dominate. Geodetector results revealed that the magnitudes of influence on groundwater potential rank as transmissivity (0.48), recharge (0.26), lineament density (0.26), lithology (0.13), drainage density (0.12), and geomorphology (0.06). The model results showed that, using a convolutional neural network (CNN), groundwater potential can be delineated with high predictive capability and accuracy. CNN-based AUC validation showed accuracies of 81.58% and 86.84% for training and testing, respectively. Based on the findings, the local government can receive technical assistance for groundwater exploration and sustainable water resource development in the Gunabay watershed.
Finally, the use of a geodetector-based deep learning algorithm can provide a new platform for industrial sectors, groundwater experts, scholars, and decision-makers.
Keywords: CNN, geodetector, groundwater influencing factors, groundwater potential, Gunabay watershed
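The geodetector factor ranking reported in this abstract is based on the q-statistic, q = 1 − Σ_h N_h·σ_h² / (N·σ²), where h indexes the strata of one factor. A minimal sketch, with illustrative values rather than the study's watershed data:

```python
from statistics import pvariance

def geodetector_q(values, strata):
    """Geodetector q in [0, 1]: higher means the factor's strata
    explain more of the spatial variance of the target values."""
    groups = {}
    for v, s in zip(values, strata):
        groups.setdefault(s, []).append(v)
    n = len(values)
    total_var = pvariance(values)            # sigma^2 over all samples
    ssw = sum(len(g) * pvariance(g) for g in groups.values())
    return 1 - ssw / (n * total_var)

# A factor whose strata separate the target values well scores near 1.
vals = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
strata = ["low", "low", "low", "high", "high", "high"]
print(round(geodetector_q(vals, strata), 3))  # ~0.996
```

Ranking the factors by their q values yields an ordering like the transmissivity-first list reported above.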
Procedia PDF Downloads 213
354 Transnational Initiatives, Local Perspectives: The Potential of Australia-Asia BRIDGE School Partnerships Project to Support Teacher Professional Development in India
Authors: Atiya Khan
Abstract:
Recent research on the condition of school education in India has reaffirmed the importance of quality teacher professional development, especially in light of rapid changes in teaching methods, learning theories, curricula, and major shifts in information and technology that education systems are experiencing around the world. However, the quality of teacher professional development programs in India is often uneven, and in some cases they are non-existent. The educational authorities in India have long recognized this and have developed a range of programs to assist in-service teacher education, but these programs have mostly been inadequate for improving the quality of teachers in India. Policy literature and reports indicate that the unevenness of these programs, and more generally the lack of quality teacher professional development in India, is due to factors such as the large number of teachers, budgetary constraints, top-down decision making, teacher overload, lack of infrastructure, and little or no follow-up. The disparity between the government's stated goals for quality teacher professional development in India and its inability to meet the learning needs of teachers suggests that new interventions are needed. The realization that globalization has increased the social, cultural, political, and economic interconnectedness between countries has also given rise to transnational opportunities for education systems, such as India's, that aim to build their capacity to support teacher professional development. Moreover, new developments in communication technologies present a plausible means of achieving high-quality professional development for teachers through the creation of social learning spaces, such as transnational learning networks.
This case study investigates the potential of one such transnational learning network, the Australia-Asia BRIDGE School Partnerships Project, to support the quality of teacher professional development in India. It explores the participation of some fifteen teachers and their principals from BRIDGE participating schools in the Delhi region of India, focusing on their professional development expectations of the BRIDGE program and accounting for their experiences in the program, in order to determine the program's potential for the professional development of the teachers in this study.
Keywords: case study, Australia-Asia BRIDGE Project, teacher professional development, transnational learning networks
Procedia PDF Downloads 266
353 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique
Authors: Sahar Tabarroki, Ahad Nazari
Abstract:
The design process is one of the key processes in construction projects. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early during the design phase, and they become expensive in either the construction documents or the construction phase. The aim of this research is to identify the risk factors of architectural design errors. First, a literature review of the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questions in the questionnaire were based on the "similar service description of study and supervision of architectural works" published by the "Vice Presidency of Strategic Planning & Supervision of I.R. Iran" as the basis of architects' tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of possible causes of risks with respect to architectural activities, these activities were located in a design process modeled with the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and providing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as "defining the current and future requirements of the project", "studies and space planning", and "time and cost estimation of the suggested solution" have a higher error risk than others. Moreover, the most important causes include "unclear goals of the client", "time pressure from the client", and "lack of knowledge among architects about the requirements of end-users".
In detecting errors in the case study, the lack of standards and design criteria, and the lack of coordination among them, was a barrier. Nevertheless, "lack of coordination between the architectural design and the electrical and mechanical facilities", "violation of standard dimensions and sizes in space design", and "design omissions" were identified as the most important design errors.
Keywords: architectural design, design error, risk management, risk factor
Procedia PDF Downloads 130
352 Knowledge, Perceptions, and Barriers of Preconception Care among Healthcare Workers in Nigeria
Authors: Taiwo Hassanat Bawa-Muhammad, Opeoluwa Hope Adegoke
Abstract:
Introduction: This study aims to examine the knowledge and perceptions of preconception care among healthcare workers in Nigeria, recognizing its crucial role in ensuring safe pregnancies. Despite its significance, awareness of preconception care remains low in the country. The study seeks to assess the understanding of preconception services and identify the barriers that hinder their efficacy. Methods: Through semi-structured interviews, 129 healthcare workers across six states in Nigeria were interviewed between January and March 2023. The interviews explored the healthcare workers' knowledge of preconception care practices, the socio-cultural influences shaping decision-making, and the challenges that limit accessibility and utilization of preconception care services. Results: The findings reveal a limited knowledge of preconception care among healthcare workers, primarily due to inadequate information dissemination within the healthcare system. Additionally, cultural beliefs significantly influence perceptions surrounding preconception care. Furthermore, financial constraints, distance to healthcare facilities, and poor health infrastructure disproportionately restrict access to preconception services, particularly for vulnerable populations. The study also highlights insufficient skills and outdated training among healthcare workers regarding preconception guidance, primarily attributed to limited opportunities for professional development. Discussion: To improve preconception care in Nigeria, comprehensive education programs must be implemented, taking into account the societal influences that shape perceptions and behaviors. These programs should aim to dispel myths and promote evidence-based practices. Additionally, training healthcare workers and integrating preconception care services into primary care settings, with support from religious and community leaders, can help overcome barriers to access. 
Strategies should prioritize affordability while emphasizing the broader benefits of preconception care beyond fertility concerns alone. Lastly, widespread literacy campaigns utilizing trusted channels are crucial for effectively disseminating information and promoting the adoption of preconception practices in Nigeria.
Keywords: preconception care, knowledge, healthcare workers, Nigeria, barriers, education, training
Procedia PDF Downloads 97
351 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data
Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin
Abstract:
The high-resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e. individual anomalies) or as clusters (i.e. a colony of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools, due to limitations of the tools and the associated sizing algorithms, and the detection threshold of the tools (i.e. the minimum detectable feature dimension). Quantifying the measurement error in ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have been scarcely reported in the literature, and this error is investigated in the present study. Limitations in the ILI tool and the clustering process can sometimes cause clustering error, defined as the error introduced during the clustering process by including or excluding a single anomaly or group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributory factors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing the ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada.
Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted Type I anomalies, is markedly less than that for anomalies with clustering error, denoted Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.
Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline
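The core comparison the framework performs can be sketched as pairing ILI-reported and field-measured lengths and flagging large discrepancies. The threshold rule and anomaly values below are purely illustrative assumptions; the study itself classifies Type I/II by the presence of clustering error, not by a fixed error threshold:

```python
def classify_anomalies(pairs, rel_error_threshold=0.30):
    """pairs: list of (ili_length_mm, field_length_mm).
    Returns per-anomaly (error_mm, relative_error, suspected type),
    flagging 'Type II' when the relative length error is large."""
    out = []
    for ili, field in pairs:
        err = ili - field                   # ILI over/under-call, mm
        rel = abs(err) / field              # relative to field measurement
        kind = "Type II" if rel > rel_error_threshold else "Type I"
        out.append((err, rel, kind))
    return out

measurements = [(52.0, 50.0), (48.0, 50.0), (95.0, 60.0)]  # mm, assumed
for err, rel, kind in classify_anomalies(measurements):
    print(f"{err:+.1f} mm ({rel:.0%}) -> {kind}")
```

In practice the study's data mining classifier would use richer ILI-reported features than a single error ratio.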
Procedia PDF Downloads 309
350 A 'Systematic Literature Review' of Specific Types of Inventory Faced by the Management of Firms
Authors: Rui Brito
Abstract:
This contribution is a literature review of inventory management, a relevant topic for firms because of its substantial use of capital, with implications for profitability in a more competitive and globalized world. Firms seek small inventories in order to reduce holding costs, namely opportunity cost, warehousing and handling costs, deterioration, and obsolescence, but larger inventories are required for reasons such as customer service, ordering cost, transportation cost, payments to suppliers to reduce unit costs or to take advantage of an expected price increase, and equipment setup cost. Thus, management must address a trade-off between smaller and larger inventories. This literature review concerns three types of inventory (spare parts, safety stock, and vendor-managed) whose management is usually beyond the scope of logistics. The applied methodology consisted of an online search of databases of scientific documents in English, namely Elsevier, Springer, Emerald, Wiley, and Taylor & Francis, excluding books unless edited, using search engines such as Google Scholar and B-on. The search was based on three keywords/strings (themes) that had to appear in the article title, suggesting the themes were very relevant to the researchers. The whole search period was between 2009 and 2018, with the aim of collecting between twenty and forty studies considered relevant within each of the keywords/strings specified. Documents were sorted by relevance, and to prevent the exclusion of the more recent articles, which have fewer citations partly because they have had less time to be cited in new research, the search period was divided into two sub-periods (2009-2015 and 2016-2018). The number of surveyed articles by theme varied from 40 to 200, and the number of citations of those articles showed a wider variation, from 3 to 216.
Selected articles from the three themes were analyzed, and the seven most-cited articles of the first sub-period and the three most-cited of the second sub-period were read in full to make a synopsis of each article. Overall, the findings show that the majority of articles present models, mostly mathematical, although with different sub-types for each theme. Almost all articles suggest further studies, some earmarked for their own author(s), which widens the diversity of the previous research. Identified research gaps concern the use of surveys to learn which models are most used by firms, the reasons for not using the models with better performance and accuracy, and the satisfaction levels with the outcomes of inventory management and its effect on the improvement of the firm's overall performance. The review ends with the limitations and contributions of the study.
Keywords: inventory management, safety stock, spare parts inventory, vendor managed inventory
Procedia PDF Downloads 96
349 Promoting Girls' and Women's Right to Education: Challenges and Strategies
Authors: Kwizera Mireille, Kharesh Ahmed Al-Khadher
Abstract:
This paper explores the critical issue of girls' and women's right to education, examining the challenges they face in accessing and benefiting from quality education. Gender disparities in education have persisted globally, hindering social progress and sustainable development. The fundamental importance of education in empowering individuals and promoting gender equality is acknowledged, making it imperative to address the disparities that limit girls' and women's educational opportunities. The paper discusses various factors contributing to these disparities, including cultural norms (common in third-world countries), socio-economic constraints, and systemic biases. Drawing on a wide range of scholarly sources, empirical studies, and reports from international organizations, this paper highlights the broader societal benefits of educating girls and women, ranging from improved health outcomes to enhanced economic development and greater social and political participation. The paper further outlines strategies and initiatives aimed at overcoming these challenges, including policy interventions, community-based programs, and international collaborations that work towards eliminating gender-based discrimination in educational settings. The paper emphasizes the significance of not only ensuring access but also fostering an inclusive and safe learning environment that encourages girls and women to thrive academically and personally. By analyzing successful case studies and best practices from around the world, the paper offers insights into effective approaches that can be adopted to enhance girls' and women's right to education globally. Furthermore, it emphasizes the importance of raising awareness of girls' and women's education. In conclusion, this paper underscores the urgency of prioritizing and protecting girls' and women's right to education as a fundamental human right and a catalyst for gender equality.
It calls for a concerted effort from governments, NGOs, educational institutions, and society as a whole to create an equitable and empowering educational landscape that contributes to gender equality and sustainable development.
Keywords: empowerment, gender equality, inclusive education, right to education
Procedia PDF Downloads 68
348 A Furniture Industry Concept for a Sustainable Generative Design Platform Employing Robot Based Additive Manufacturing
Authors: Andrew Fox, Tao Zhang, Yuanhong Zhao, Qingping Yang
Abstract:
The furniture manufacturing industry has in general been slow to adopt the latest manufacturing technologies, historically relying heavily upon specialised conventional machinery. This approach not only requires high levels of specialist process knowledge, training, and capital investment but also suffers from significant subtractive manufacturing waste and high logistics costs due to the requirement for centralised manufacturing, with high levels of furniture product not recycled or reused. This paper aims to address these problems by introducing suitable digital manufacturing technologies to create step changes in furniture manufacturing design, as traditional design practices have been reported as building in 80% of environmental impact. In this paper, a 3D printing robot for furniture manufacturing is reported. The 3D printing robot mainly comprises a KUKA industrial robot, an Arduino microprocessor, and a self-assembled screw-fed extruder. Compared to a traditional 3D printer, the 3D printing robot has a larger motion range and can be easily upgraded to enlarge the maximum size of the printed object. Generative design is also investigated in this paper, aiming to establish a combined design methodology that allows simultaneous assessment of goals, constraints, materials, and manufacturing processes. 'Matrixing' for part amalgamation and product performance optimisation is enabled. The generative design goals of integrated waste reduction, increased manufacturing efficiency, optimised product performance, and reduced environmental impact constitute a truly lean and innovative future design methodology. In addition, there is massive future potential to leverage Single Minute Exchange of Die (SMED) theory through generative design post-processing of geometry for robot manufacture, resulting in 'mass customised' furniture with virtually no setup requirements. These generatively designed products can be manufactured using robot-based additive manufacturing.
Essentially, the 3D printing robot is already functional; some initial goals have been achieved and are presented in this paper.
Keywords: additive manufacturing, generative design, robot, sustainability
Procedia PDF Downloads 131
347 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, efforts that are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and number of layers and crystals. To address this, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net was trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is refined with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions by area size and perimeter of the crystals. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-quality measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower error in the measurements without greater effort in data handling. All in all, the method developed is a significant time saver, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
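The post-segmentation measurement step described in this abstract, delimiting each crystal in a binary mask and extracting its area and perimeter, can be sketched with a simple connected-component pass. The tiny mask below is an illustrative stand-in for a segmented SEM image, not data from the study:

```python
def label_and_measure(mask):
    """4-connected labeling of a binary mask (list of lists of 0/1).
    Returns {label: (area_px, perimeter_px)}, where perimeter counts
    pixel edges exposed to background or the image border."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    stats, next_label = {}, 1
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                stack, area, perim = [(r, c)], 0, 0
                labels[r][c] = next_label
                while stack:                      # flood-fill one crystal
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx]:
                            if not labels[ny][nx]:
                                labels[ny][nx] = next_label
                                stack.append((ny, nx))
                        else:
                            perim += 1            # exposed pixel edge
                stats[next_label] = (area, perim)
                next_label += 1
    return stats

mask = [[1, 1, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 1]]
print(label_and_measure(mask))  # -> {1: (4, 8), 2: (2, 6)}
```

Converting the pixel counts to physical units would use the SEM image's scale calibration; the frequency-distribution graphs then follow directly from the per-crystal database.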
Procedia PDF Downloads 160
346 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids
Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje
Abstract:
A Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed by experimental and Computational Fluid Dynamic (CFD) approaches using the DCAB031 model located in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump's rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in STAR-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations are in good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) calculated to validate the mesh was 2.5%. In this case, three different rotational speeds were evaluated (200, 300, 400 rpm), and a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate was observed.
The maximum production rates at the different speeds for water were 3.8 GPM, 4.3 GPM, and 6.1 GPM; for the oils tested they were 1.8 GPM, 2.5 GPM, and 3.8 GPM, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% of the pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; between fluids, however, there is a reduction due to viscosity.
Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise
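The model-validation step, comparing experimental and CFD-predicted pressure rise at matching operating points via a mean squared error, can be sketched as follows. The pressure values are illustrative assumptions, not the DCAB031 measurements:

```python
def mse(experimental, simulated):
    """Mean squared error between paired experimental and CFD values."""
    assert len(experimental) == len(simulated)
    return sum((e - s) ** 2 for e, s in zip(experimental, simulated)) / len(experimental)

def relative_rmse_percent(experimental, simulated):
    """RMSE normalized by the mean experimental value, as a percentage,
    one way to express model error on a percent scale."""
    mean_exp = sum(experimental) / len(experimental)
    return 100 * mse(experimental, simulated) ** 0.5 / mean_exp

exp_dp = [120.0, 95.0, 70.0, 40.0]   # pressure rise, kPa (assumed)
cfd_dp = [112.0, 99.0, 66.0, 43.0]   # CFD predictions at same flow rates
print(f"MSE = {mse(exp_dp, cfd_dp):.2f}, "
      f"relative RMSE = {relative_rmse_percent(exp_dp, cfd_dp):.1f}%")
```

The exact normalization behind the paper's "MSE under 21%" figure is not stated in the abstract; the percent form above is one common convention.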
Procedia PDF Downloads 128
345 Smart Irrigation System for Applied Irrigation Management in Tomato Seedling Production
Authors: Catariny C. Aleman, Flavio B. Campos, Matheus A. Caliman, Everardo C. Mantovani
Abstract:
The seedling production stage is a critical point in the vegetable production system. Obtaining high-quality seedlings is a prerequisite for subsequent cropping to go well, and productivity optimization is required. Water management is an important step in agricultural production; meeting the water requirement of horticultural seedlings can provide higher quality and increase field production. The practice of irrigation is indispensable and requires a properly adjusted, good-quality irrigation system, together with a specific water management plan to meet the water demand of the crop. Irrigation management in seedling production requires a great deal of specific information, especially when it involves the use of inputs such as water-retaining polymers (hydrogels) and automation technologies for data acquisition and the irrigation system. The experiment was conducted in a greenhouse at the Federal University of Viçosa, Viçosa - MG. Tomato seedlings (Lycopersicon esculentum Mill) were produced in plastic trays of 128 cells, suspended 1.25 m above the ground. The seedlings were irrigated by 4 fixed-jet 360º micro sprinklers per tray, duly isolated by sideboards, following the methodology developed for this work. During Phase 1, in January/February 2017 (duration of 24 days), the crop coefficient (Kc) of seedlings grown in the presence and absence of hydrogel was evaluated by weighing lysimeter. In Phase 2, September 2017 (duration of 25 days), the seedlings were submitted to 4 irrigation management treatments (Kc, timer, 0.50 ETo, and 1.00 ETo), in the presence and absence of hydrogel, and then evaluated with respect to quality parameters.
The microclimate inside the greenhouse was monitored with air temperature, relative humidity and global radiation sensors connected to a microcontroller that performed hourly calculations of reference evapotranspiration by the FAO56 Penman-Monteith standard method, modified for the long-wave balance according to Walker, Aldrich, Short (1983), and that conducted the water balance and irrigation decision making for each experimental treatment. The Kc of seedlings grown on a substrate with hydrogel (1.55) was higher than the Kc on the pure substrate (1.39). The use of the hydrogel was a differential for producing earlier tomato seedlings, with greater final height, larger stem collar diameter, greater shoot dry mass accumulation, larger crown projection area and a higher relative growth rate. The 1.00 ETo management promoted the highest relative growth rate.
Keywords: automatic system, efficiency of water use, precision irrigation, micro sprinkler
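The Kc-based irrigation decision logic described above can be sketched as a simple daily water balance: crop water use is estimated as ETc = Kc × ETo, and irrigation is triggered once the accumulated depletion exceeds a management-allowed threshold. The Kc values are those reported in the abstract; the ETo series and the 2 mm trigger depth are assumed purely for illustration.

```python
# Hedged sketch of a Kc * ETo water-balance irrigation trigger.
# Kc values (1.55 with hydrogel, 1.39 without) come from the abstract;
# the ETo series and the 2 mm threshold are assumptions.

def irrigation_schedule(eto_mm, kc, threshold_mm=2.0):
    """Return (day, depth) events where accumulated ETc triggers irrigation."""
    depletion = 0.0
    events = []
    for day, eto in enumerate(eto_mm):
        depletion += kc * eto            # daily crop evapotranspiration
        if depletion >= threshold_mm:
            events.append((day, round(depletion, 2)))  # irrigate this depth
            depletion = 0.0
    return events

eto = [0.9, 1.1, 1.0, 1.2, 0.8]          # mm/day, assumed values
with_gel = irrigation_schedule(eto, kc=1.55)
```

Because the hydrogel treatment has the higher Kc, it accumulates depletion faster and triggers irrigation events sooner, which is consistent with its higher measured water use.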
Procedia PDF Downloads 116
344 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh
Authors: Zahid Khalil, Saad Ul Haque, Asif Khan
Abstract:
Decision making about identifying suitable sites for any project by considering different parameters is difficult; using GIS and Multi-Criteria Analysis (MCA) can make it easier. This technology has proved to be an efficient and adequate means of acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. GIS software was used to create all the spatial parameters for the analysis. The derived parameters are slope, drainage density, rainfall, land use/land cover, soil groups, Curve Number (CN) and runoff index, with a spatial resolution of 30 m. The data used for deriving the above layers include the 30-meter resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centers for Environmental Prediction (NCEP) and soil data from the Harmonized World Soil Database (HWSD). The land use/land cover map is derived from Landsat 8 using supervised classification. Slope, the drainage network and watersheds are delineated by terrain processing of the DEM. The Soil Conservation Service (SCS) method is implemented to estimate surface runoff from rainfall. Prior to this, the SCS-CN grid is developed by integrating the soil and land use/land cover rasters. These layers, along with some technical and ecological constraints, are assigned weights on the basis of suitability criteria. The pairwise comparison method, also known as the Analytical Hierarchy Process (AHP), is adopted as the MCA technique for assigning a weight to each decision element. All the parameters and groups of parameters are integrated using weighted overlay in a GIS environment to produce suitable sites for the dams. The resultant layer is then classified into four classes, namely best suitable, suitable, moderate and less suitable. This study demonstrates a contribution to decision making about suitable-site analysis for small dams using geospatial data with a minimal amount of ground data.
These suitability maps can be helpful for water resource management organizations in determining feasible rainwater harvesting (RWH) structures.
Keywords: remote sensing, GIS, AHP, RWH
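The AHP weighting step mentioned above can be sketched in a few lines: criterion weights are taken as the principal eigenvector of the pairwise comparison matrix (here found by power iteration), and a consistency ratio checks that the judgments are coherent. The 3×3 matrix below (e.g. slope vs. drainage density vs. rainfall) is an assumed example, not the weighting actually used in the study.

```python
# Minimal AHP sketch: principal-eigenvector weights plus consistency ratio.
# The example pairwise matrix is assumed, not the study's real judgments.

def ahp_weights(matrix, iters=100):
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):                       # power iteration
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]
    # principal eigenvalue estimate -> consistency index (Saaty)
    lam = sum(sum(matrix[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # Saaty random index
    return w, ci / ri                            # weights, consistency ratio

pairwise = [[1,   3,   5],
            [1/3, 1,   3],
            [1/5, 1/3, 1]]
weights, cr = ahp_weights(pairwise)
```

A consistency ratio below 0.1 is the conventional acceptance threshold; the derived weights would then feed the GIS weighted-overlay step.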
Procedia PDF Downloads 389
343 Hospital Wastewater Treatment by Ultrafiltration Membrane System
Authors: Selin Top, Raul Marcos, M. Sinan Bilgili
Abstract:
Although there have been several studies on the collection, temporary storage, handling and disposal of solid wastes generated by hospitals, there are only a few studies on the liquid wastes they generate, i.e. hospital wastewaters. Water consumption in hospitals is substantial: while minimum domestic water consumption is 100 L per person per day, water consumption per hospital bed generally ranges between 400 and 1200 L per day. This high consumption produces a correspondingly high amount of wastewater. The quantity of wastewater produced in a hospital depends on different factors: the number of beds, hospital age, accessibility to water, the general services present inside the structure (kitchen, laundry, laboratory, diagnosis, radiology, and air conditioning), the number and type of wards and units, institutional management policies and awareness in managing the structure while safeguarding the environment, climate, and cultural and geographic factors. In our country, hospital wastewaters have been characterized by classical parameters in only a very few studies. However, as mentioned above, this type of wastewater may contain different compounds than domestic wastewater. Hospital wastewater (HWW) is the wastewater generated by all activities of the hospital, medical and non-medical. Nowadays, hospitals are considered one of the biggest sources of wastewater along with urban sources, agricultural effluents and industrial sources. As a health-care waste, hospital wastewater has a quality similar to municipal wastewater, but it may also contain various hazardous components due to the use of disinfectants, pharmaceuticals, radionuclides and solvents, making the direct connection of hospital wastewater to the municipal sewage network unsuitable. These characteristics may represent a serious health hazard, and children, adults and animals all have the potential to come into contact with this water.
Therefore, the treatment of hospital wastewater is an important current point of interest. This paper investigates hospital wastewater treatment by membrane systems. The aim of this study is to characterize the hospital wastewater and to evaluate the efficiency of its treatment by pressure-driven membrane filtration systems such as ultrafiltration (UF). Hospital wastewater samples were taken directly from the sewage system of Şişli Etfal Training and Research Hospital, located in the district of Şişli, in the European part of Istanbul. The hospital is a 784-bed tertiary care center with a daily outpatient department of 3850 patients. An ultrafiltration membrane was used in the experimental treatment, and the influence of the pressure exerted on the membranes, ranging from 1 to 3 bar, was examined. The permeate flux across the membrane was observed to define the membrane fouling points. The global COD and BOD5 removal efficiencies were 54% and 75% respectively for ultrafiltration, all the suspended solids (SST) removal efficiencies were above 90%, and a successful removal of the pathological bacteria measured was achieved.
Keywords: hospital wastewater, membrane, ultrafiltration, treatment
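The removal efficiencies quoted above follow from the standard definition RE (%) = 100 × (C_in − C_out) / C_in. A minimal sketch, with influent/effluent concentrations assumed for illustration (chosen so the example reproduces the reported 54% COD figure, not taken from the study's measurements):

```python
# Removal-efficiency sketch; concentration values are assumed placeholders.

def removal_efficiency(c_in, c_out):
    """Percent removal between influent and effluent concentrations."""
    if c_in <= 0:
        raise ValueError("influent concentration must be positive")
    return 100.0 * (c_in - c_out) / c_in

cod_re = removal_efficiency(500.0, 230.0)   # mg/L in, mg/L out (assumed)
```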
Procedia PDF Downloads 304
342 Astronomical Object Classification
Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan
Abstract:
We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of more than 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs classification as well as redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS) but is now used in a wide variety of survey projects. We checked its performance against spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters.
The calibration accuracy poses strong constraints on accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis
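The template-library classification described above can be sketched as a chi-square comparison of observed colors against each template, converted into normalized class probabilities (flat priors assumed). The three templates and the observed colors below are toy values chosen only to illustrate the mechanism, not CADIS library entries.

```python
# Toy sketch of template-based photometric classification.
# Templates and observed colors are invented illustrative values.

import math

TEMPLATES = {
    "star":   [0.45, 0.20, 0.05],
    "galaxy": [0.80, 0.55, 0.30],
    "quasar": [0.10, 0.35, 0.60],
}

def classify(colors, sigma=0.1):
    """Posterior class probabilities from a Gaussian chi-square misfit."""
    logls = {}
    for name, tmpl in TEMPLATES.items():
        chi2 = sum((c - t) ** 2 / sigma ** 2 for c, t in zip(colors, tmpl))
        logls[name] = -0.5 * chi2
    norm = sum(math.exp(v) for v in logls.values())   # flat priors assumed
    return {name: math.exp(v) / norm for name, v in logls.items()}

probs = classify([0.78, 0.50, 0.28])   # colors close to the galaxy template
```

In the real method the library also spans redshifted galaxy and quasar templates, so the same probability density functions yield both the class and the redshift estimate.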
Procedia PDF Downloads 78
341 Radiation Protection and Licensing for an Experimental Fusion Facility: The Italian and European Approaches
Authors: S. Sandri, G. M. Contessa, C. Poggi
Abstract:
An experimental nuclear fusion device can be seen as a step toward the development of the future nuclear fusion power plant. Compared with other possible solutions to the energy problem, nuclear fusion has advantages that ensure sustainability and security. In particular, considering the radioactivity and the radioactive waste produced, in a nuclear fusion plant the component materials could be selected so as to limit the decay period, making recycling in a new reactor possible about 100 years after the beginning of decommissioning. To achieve this and other pertinent goals, many experimental machines have been developed and operated worldwide in the last decades, underlining that radiation protection and worker exposure are critical aspects of these facilities due to the high-flux, high-energy neutrons produced in the fusion reactions. Direct radiation, material activation, tritium diffusion and other related issues pose a real challenge to the demonstration that these devices are safer than nuclear fission facilities. In Italy, a limited number of fusion facilities have been constructed and operated over the last 30 years, mainly at the ENEA Frascati Center, and the radiation protection approach, addressed by the national licensing requirements, shows that it is not always easy to respect the constraints on workers' exposure to ionizing radiation. In the current analysis, the main radiation protection issues encountered in the Italian fusion facilities are considered and discussed, and the technical and legal requirements are described. The licensing process for these kinds of devices is outlined and compared with that of other European countries.
The following aspects are considered throughout the current study: i) description of the installation, plant and systems, ii) suitability of the area, buildings, and structures, iii) radiation protection structures and organization, iv) exposure of personnel, v) accident analysis and the relevant radiological consequences, vi) radioactive waste assessment and management. In conclusion, the analysis points out the need for special attention to the radiological exposure of workers in order to demonstrate at least the same level of safety as that reached at nuclear fission facilities.
Keywords: fusion facilities, high energy neutrons, licensing process, radiation protection
Procedia PDF Downloads 352
340 Electret: A Solution of Partial Discharge in High Voltage Applications
Authors: Farhina Haque, Chanyeop Park
Abstract:
The high efficiency, high field, and high power density provided by wide bandgap (WBG) semiconductors and advanced power electronic converter (PEC) topologies have enabled the dynamic control of power in medium- to high-voltage systems. Although WBG semiconductors outperform conventional silicon-based devices in terms of voltage rating, switching speed, and efficiency, the increased voltage handling, high dv/dt, and compact device packaging increase local electric fields, which are the main causes of partial discharge (PD) in advanced medium- and high-voltage applications. PD, which occurs actively in voids, triple points, and airgaps, is an inevitable dielectric challenge that causes insulation and device aging. The aging process accelerates over time and eventually leads to the complete failure of the application. Hence, it is critical to mitigate PD. Sharp edges, airgaps, triple points, and bubbles are common defects that exist in any medium- to high-voltage device. The defects are created during the manufacturing processes of the devices and are prone to high-electric-field-induced PD due to the low permittivity and low breakdown strength of the gaseous medium filling the defects. A contemporary approach to mitigating PD by neutralizing electric fields in high-power-density applications is introduced in this study. To neutralize the locally enhanced electric fields that occur around triple points, airgaps, sharp edges, and bubbles, electrets are developed and incorporated into high-voltage applications. Electrets are electric-field-emitting dielectric materials that carry electrical charges on the surface and in the bulk. In this study, electrets are fabricated by electrically charging polyvinylidene difluoride (PVDF) films based on the widely used triode corona discharge method.
To investigate the PD mitigation performance of the fabricated electret films, a series of PD experiments was conducted on both the charged and uncharged PVDF films under square voltage stimuli that represent PWM waveforms. In addition to single-layer electrets, multiple layers of electrets are also tested to mitigate PD caused by higher system voltages. The electret-based approach shows great promise in mitigating PD by neutralizing the local electric field. The results of the PD measurements suggest that, with further development of the electret fabrication process, an ultimate solution to this decades-long dielectric challenge would be possible.
Keywords: electrets, high power density, partial discharge, triode corona discharge
Procedia PDF Downloads 203
339 Adaptive Assemblies: A Scalable Solution for Atlanta's Affordable Housing Crisis
Authors: Claudia Aguilar, Amen Farooq
Abstract:
Among other cities in the United States, Atlanta is experiencing levels of growth that surpass anything witnessed in the last century. With this surge of population influx, the available housing is practically bursting at the seams: supply is low, and demand is high. In effect, the average one-bedroom apartment runs for 1,800 dollars per month. The city is desperately seeking new opportunities to provide affordable housing at an expeditious rate, as made evident by the recent updates to the city's zoning. With the recent spike in the housing market, young professionals, in particular millennials, are desperately looking for alternatives to stay within the city. To remedy its affordable housing crisis, the city of Atlanta is planning to introduce 40 thousand new affordable housing units by 2026. To meet this urgent need for more affordable housing, the architectural response needs to adapt. A method that has proven successful in modern housing is modular development, but it is a method that has been constrained to the dimensions of the maximum load of an eighteen-wheeler. This constraint has diluted the architect's ability to produce site-specific, informed design and instead contributes to the "cookie cutter" stigma with which the method has been labeled. This thesis explores the design methodology for modular housing by revisiting its constructability and adaptability. The research focuses on a modular housing type that could break away from the constraints of transport and deliver adaptive, reconfigurable assemblies. The adaptive assemblies represent an integrated design strategy for assembling the future of affordable dwelling units. The goal is to take advantage of a component-based system and explore a scalable solution to modular housing.
Specifically, this proposal aims to design a kit of parts that is easily transported and assembled but also gives the ability to customize the use of components to suit each unique condition. The benefits of this concept could include decreased construction time, cost, on-site labor, and disruption while providing quality housing with affordable and flexible options.
Keywords: adaptive assemblies, modular architecture, adaptability, constructibility, kit of parts
Procedia PDF Downloads 85
338 Enhancement of Cross-Linguistic Effect with the Increase in the Multilingual Proficiency during Early Childhood: A Case Study of English Language Acquisition by a Pre-School Child
Authors: Anupama Purohit
Abstract:
The paper is a study of the inevitable cross-linguistic effects found in early multilingual learners. Cross-linguistic behaviours like code-mixing, code-switching, foreign accent, literal translation, redundancy and syntactic manipulation effected by other languages on the English output of a non-native pre-school child are discussed here. A case study method is adopted to support the claim of the title. The language behaviour of a simultaneously tetralingual pre-school child (from 1;3 to 4;0) is analysed here. The sample output data of the child were gathered from diary entries maintained by her family, regular observations and video recordings made since her birth. She receives input in her mother tongue, Sambalpuri, from her grandparents only; in Hindi, the local language, from her play-school and the neighbourhood; in English only from her mother and occasional visits of family friends; and in Odia only during the reading of an Odia story book. The child was exposed to code-mixing of all the languages throughout her childhood, but code-mixing, literal translation, redundancy and duplication were absent in her initial stage of multilingual acquisition. As the child was more proficient in English than in her other first languages and had never heard code-mixing within English, it was expected from her input pattern of English (one parent, one language) that she would maintain purity in her use of English while talking to an English-language interlocutor. But with the gradual increase in proficiency in each of her languages, her handling of the multiple codes became cross-linguistically deft. It can be deduced from the case study that, after attaining a certain milestone proficiency in each language, the child's linguistic faculty can operate at a metalinguistic level.
The functional use of each morpheme, the arrangement of morphemes in words and sentences, the suprasegmental features, lexical-semantic mapping, culture-specific use of a language and pragmatic skills converge to give a typically childlike multilingual output that is intelligible to multilingual people (with the same set of languages in combination). The result is appealing because, for the same ideas that the child used to express (perhaps with grammatically wrong expressions) in one language, she gradually starts showing cross-linguistic effects in her expressions. So the paper pleads for the separatist view from the very beginning of the holophrastic phase (as the child expresses herself in addressee-specific language); but the development of a metalinguistic ability that helps the child communicate in a sophisticated way according to the linguistic status of the addressee is unique to the multilingual child. This metalinguistic ability is independent of the mode of input of a multilingual child.
Keywords: code-mixing, cross-linguistic effect, early multilingualism, literal translation
Procedia PDF Downloads 299
337 Solar Panel Design Aspects and Challenges for a Lunar Mission
Authors: Mannika Garg, N. Srinivas Murthy, Sunish Nair
Abstract:
TeamIndus was the only Indian team to participate in the Google Lunar XPRIZE (GLXP), an incentive prize space competition organized by the XPRIZE Foundation and sponsored by Google. The main objective of the mission is to soft-land a rover on the moon's surface, travel a minimum displacement of 500 meters and transmit HD and near-real-time (NRT) videos and images to Earth. TeamIndus is designing a lunar lander that carries the rover and delivers it onto the surface of the moon with a soft landing. For the lander to survive throughout the mission, energy is required to operate all attitude control sensors, actuators, heaters and other necessary components. Photovoltaic solar array systems are the most common primary source of power generation for any spacecraft. The scope of this paper is to provide a system-level approach for designing the solar array system of the lander to generate the power required to accomplish the mission. For this mission, the design effort is directed toward higher efficiency, high reliability and high specific power; accordingly, highly efficient multi-junction cells have been considered. The design is also influenced by other constraints: the mission profile, the chosen spacecraft attitude, the overall lander configuration, cost effectiveness and sizing requirements. This paper also addresses various solar array design challenges, such as operating temperature, shadowing, the radiation environment, mission life, and the strategy for supporting the required power levels (peak and average). The challenge of generating sufficient power at the time of surface touchdown, due to the low sun elevation (El) and azimuth (Az) angles, which depend on the lunar landing site, is also showcased in this paper.
To achieve this goal, an energy balance analysis has been carried out to study the impact of the above-mentioned factors and to meet the requirements, and it is discussed in this paper.
Keywords: energy balance analysis, multi junction solar cells, photovoltaic, reliability, spacecraft attitude
Procedia PDF Downloads 230
336 Application of Thermal Dimensioning Tools to Consider Different Strategies for the Disposal of High-Heat-Generating Waste
Authors: David Holton, Michelle Dickinson, Giovanni Carta
Abstract:
The principle of geological disposal is to isolate higher-activity radioactive wastes deep inside a suitable rock formation to ensure that no harmful quantities of radioactivity reach the surface environment. To achieve this, wastes will be placed in an engineered underground containment facility – the geological disposal facility (GDF) – which will be designed so that natural and man-made barriers work together to minimise the escape of radioactivity. Internationally, various multi-barrier concepts have been developed for the disposal of higher-activity radioactive wastes. High-heat-generating wastes (HLW, spent fuel and Pu) present a number of technical challenges different from those associated with the disposal of low-heat-generating waste. Thermal management of the disposal system must be taken into consideration in GDF design; temperature constraints might apply to the wasteform, container, buffer and host rock. Of these, the temperature limit placed on the buffer component of the engineered barrier system (EBS) can be the most constraining factor. The heat must therefore be managed such that the properties of the buffer are not compromised to the extent that it cannot deliver the required level of safety. The maximum temperature of the buffer surrounding a container at the centre of a fixed array of heat-generating sources arises from heat diffusing from the neighbouring heat-generating wastes, which incrementally contributes to the temperature of the EBS. A range of strategies can be employed for managing heat in a GDF, including the spatial arrangement or pattern of the containers; different geometrical configurations can influence the overall thermal density in a disposal facility (or an area within a facility) and therefore the maximum buffer temperature. A semi-analytical thermal dimensioning tool and methodology have been applied at a generic stage to explore a range of strategies for managing the disposal of high-heat-generating waste.
A number of examples, including different geometrical layouts and chequer-boarding, are illustrated to demonstrate how these tools can be used to consider safety margins and inform strategic disposal options in the face of uncertainty at a generic stage of the development of a GDF.
Keywords: buffer, geological disposal facility, high-heat-generating waste, spent fuel
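The superposition argument above lends itself to a semi-analytical sketch: each neighbouring container is idealised as a continuous point heat source in an infinite medium, with temperature rise ΔT = Q / (4πkr) · erfc(r / (2√(αt))), and the contributions are summed over the array. The thermal conductivity, diffusivity, container power, spacing and timescale below are all assumed round numbers, not GDF design values, and the decay of heat output over time is neglected.

```python
# Illustrative superposition of continuous point heat sources to show
# how container pitch changes the far-field contribution to buffer
# temperature. All material and source parameters are assumptions.

import math

def point_source_dT(q_watts, r_m, t_s, k=1.5, alpha=7e-7):
    """Temperature rise from a constant-power point source (infinite medium)."""
    return q_watts / (4 * math.pi * k * r_m) * math.erfc(r_m / (2 * math.sqrt(alpha * t_s)))

def array_peak_dT(q_watts, pitch_m, n=5, t_years=50.0):
    """Summed neighbour contribution at the centre of a (2n+1)^2 array."""
    t = t_years * 3.154e7                      # years -> seconds
    total = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            if i == 0 and j == 0:
                continue   # central container needs a separate near-field model
            r = pitch_m * math.hypot(i, j)
            total += point_source_dT(q_watts, r, t)
    return total

tight = array_peak_dT(600.0, pitch_m=6.0)      # closer spacing
spaced = array_peak_dT(600.0, pitch_m=9.0)     # wider spacing
```

Widening the pitch lowers the summed neighbour contribution, which is the mechanism by which layout and chequer-boarding strategies trade footprint against buffer temperature margin.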
Procedia PDF Downloads 285
335 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains
Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe
Abstract:
The increasing digitalization of value chains can help companies handle the rising complexity of their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains aimed at improving productivity, handling increasing time and cost pressure and meeting the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, make their production more efficient. Lean production, the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies, and there is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists; this link is based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps. The first step, 'target definition', describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail.
The second step, 'analysis of the value chain', verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the 'digital evaluation process' ensures the usefulness of digital adaptations with regard to their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry show that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.
Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain
Procedia PDF Downloads 313
334 Is Sodium Channel Nav1.7 an Ideal Therapeutically Analgesic Target? A Systematic Review
Authors: Yutong Wan, John N. Wood
Abstract:
Introduction: SCN9A-encoded Nav1.7 is considered an ideal therapeutic target with minimal side effects for the pharmaceutical industry, because SCN9A variants cause both gain-of-function mutations associated with human pain disorders and loss-of-function mutations producing congenital insensitivity to pain. This study reviews the clinical effectiveness of existing Nav1.7 inhibitors, which in theory should be powerful analgesics. Methods: A systematic review was conducted on the effectiveness of current Nav1.7 blockers undergoing clinical trials. Studies were extracted mainly from PubMed, the U.S. National Library of Medicine clinical trials registry, the World Health Organization International Clinical Trials Registry, the ISRCTN registry platform, and the Integrated Research Application System of the NHS. Only studies with full text available, conducted using double-blinded, placebo-controlled, randomised designs and reporting at least one analgesic measurement were included. Results: Overall, 61 trials were screened, and eight studies covering PF-05089771 (Pfizer), TV-45070 (Teva & Xenon), and BIIB074 (Biogen) met the inclusion criteria. Most studies were excluded because their results were not published. All three compounds demonstrated insignificant analgesic effects, and the comparison between PF-05089771 and pregabalin/ibuprofen showed that PF-05089771 was a much weaker analgesic. All three drug candidates had only mild side effects, indicating the potential for further investigation of Nav1.7 antagonists. Discussion: The failure of current Nav1.7 small-molecule inhibitors might be attributed to ignorance of the key role of endogenous opioid systems in Nav1.7-null mutants, a lack of selectivity and blocking potency, and central impermeability. A synergistic drug combination, the subject of a recent UCL patent, pairing a small dose of Nav1.7 blockers with opioids or enkephalinase inhibitors, dramatically enhanced the analgesic effects. Conclusion: The Nav1.7 blockers currently in clinical testing are generally disappointing.
However, the newer generation of Nav1.7-targeting analgesics has overcome the major constraints of its predecessors.
Keywords: chronic pain, Nav1.7 blockers, SCN9A, systematic review
Procedia PDF Downloads 131
333 Exploring the History of Chinese Music Acoustic Technology through Data Fluctuations
Abstract:
The study of extant musical sites can provide a complementary picture of historical ethnomusicological information. In their data collection on Chinese opera stages, researchers found that one Ming Dynasty opera stage reached a width of nearly 18 meters, while all stages of the same period and after it were far from such a width, being significantly narrower. This transient historical fluctuation in the width data, occurring in the absence of any constraint on construction scale, piqued the researchers' interest: why is there such variation in width, and what factors prevented the further widening of stages? To address this question, this study used a comparative approach, conducting venue experiments on this stage and on another stage used for intangible-heritage opera performances, collecting the subjective perceptions of performers and audiences on the different stages, and combining these with measurements of echo and delay made with the BK Connect platform software. From the subjective and objective results, it is inferred that, by exploring the effect of stage width on musical performance and on the listening state of the audience, the Chinese of the Ming Dynasty discovered and understood the acoustical phenomenon now known as the Haas effect, and utilized this discovery to serve music in subsequent stage construction. This discovery marks a node in the evolution of Chinese architectural acoustics driven by musical demands. It is also instructive that, in contrast to many of the world's "unsuccessful civilizations," China can use a combination of built heritage and intangible cultural research to chart a clear, demand-driven course for the evolution of human music technology.
This practical experience can also be applied to the exploration and interpretation of base data from other musical heritage sites.
Keywords: Haas effect, musical acoustics, history of acoustical technology, Chinese opera stage, structure
Procedia PDF Downloads 184
332 Hansen Solubility Parameters, Quality by Design Tool for Developing Green Nanoemulsion to Eliminate Sulfamethoxazole from Contaminated Water
Authors: Afzal Hussain, Mohammad A. Altamimi, Syed Sarim Imam, Mudassar Shahid, Osamah Abdulrahman Alnemer
Abstract:
Exhaustive application of sulfamethoxazole (SUX) has become a global threat to human health owing to water contamination from diverse sources. This study addressed the combined application of Hansen solubility parameters (HSPiP software) and a Quality by Design tool for developing various green nanoemulsions. The HSPiP program assisted in screening suitable excipients based on Hansen solubility parameters and experimental solubility data. Various green nanoemulsions were prepared and characterized for globule size, size distribution, zeta potential, and removal efficiency. Design-Expert (DoE) software further helped to identify the critical factors with a direct impact on percent removal efficiency, size, and viscosity. Morphology was visualized under transmission electron microscopy (TEM). Finally, the treated water was analyzed to confirm the absence of the tested drug, employing ICP-OES (inductively coupled plasma optical emission spectroscopy) and HPLC (high-performance liquid chromatography). Results showed that HSPiP predicted a biocompatible lipid, a safe surfactant (lecithin), and propylene glycol (PG). The experimental solubility of the drug in the predicted excipients was quite convincing and vindicated the prediction. Various green nanoemulsions were fabricated and evaluated in vitro. Globule size (100-300 nm), PDI (0.1-0.5), zeta potential (~25 mV), and removal efficiency (%RE = 70-98%) were found to be in an acceptable range for selecting input factors and levels in the DoE. The experimental design tool assisted in identifying the most critical variables controlling %RE and in optimizing the nanoemulsion composition under the set constraints. Dispersion time was varied from 5-30 min. Finally, the ICP-OES and HPLC techniques corroborated the absence of SUX in the treated water. Thus, the strategy is simple, economic, selective, and efficient.
Keywords: quality by design, sulfamethoxazole, green nanoemulsion, water treatment, ICP-OES, Hansen program (HSPiP software)
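For context, HSPiP-based excipient screening rests on the standard Hansen relations (quoted here as general background, not as equations reported by this study): the total solubility parameter combines dispersion ($\delta_D$), polar ($\delta_P$) and hydrogen-bonding ($\delta_H$) contributions, and the solute–solvent distance $R_a$ relative to the interaction radius $R_0$ gives the relative energy difference (RED), with $\mathrm{RED} < 1$ indicating a likely good solvent:

```latex
\delta_t^2 = \delta_D^2 + \delta_P^2 + \delta_H^2,
\qquad
R_a^2 = 4\,(\delta_{D1}-\delta_{D2})^2 + (\delta_{P1}-\delta_{P2})^2 + (\delta_{H1}-\delta_{H2})^2,
\qquad
\mathrm{RED} = \frac{R_a}{R_0}.
```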
Procedia PDF Downloads 82
331 Densities and Volumetric Properties of {Difurylmethane + [(C5 – C8) N-Alkane or an Amide]} Binary Systems at 293.15, 298.15 and 303.15 K: Modelling Excess Molar Volumes by Prigogine-Flory-Patterson Theory
Authors: Belcher Fulele, W. A. A. Ddamba
Abstract:
Study of solvent systems contributes to the understanding of the intermolecular interactions that occur in binary mixtures. These interactions involve, among others, strong dipole-dipole interactions and weak van der Waals interactions, which find significant application in pharmaceuticals, solvent extraction, reactor design, and solvent handling and storage processes. Binary solvent mixtures can thus be used as models to interpret the thermodynamic behavior of real solution mixtures. Densities of pure DFM, of the n-alkanes (n-pentane, n-hexane, n-heptane and n-octane) and amides (N-methylformamide, N-ethylformamide, N,N-dimethylformamide and N,N-dimethylacetamide), and of their [DFM + ((C5-C8) n-alkane or amide)] binary mixtures over the entire composition range have been reported at temperatures of 293.15, 298.15 and 303.15 K and atmospheric pressure. These data have been used to derive the thermodynamic properties: the excess molar volume of solution, apparent molar volumes, excess partial molar volumes, limiting excess partial molar volumes, and limiting partial molar volumes of each component of a binary mixture. The results are discussed in terms of the possible intermolecular interactions and structural effects that occur in the binary mixtures. The variation of excess molar volume with DFM composition exhibits sigmoidal behavior for the [DFM + (C5-C7) n-alkane] binary mixtures, while for the [DFM + n-octane] binary system, positive deviation of the excess molar volume function was observed over the entire composition range. For each [DFM + (C5-C8) n-alkane] binary mixture, the excess molar volume decreased with increasing temperature. The excess molar volume for each [DFM + (NMF or NEF or DMF or DMA)] binary system was negative over the entire DFM composition range at each of the three temperatures investigated. The negative deviations in excess molar volume follow the order: DMA > DMF > NEF > NMF.
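For reference, the excess molar volume discussed throughout this abstract is obtained from the measured densities by the standard definition (general background, not a formula specific to this study), where $x_i$, $M_i$ and $\rho_i$ are the mole fraction, molar mass and pure-component density of component $i$, and $\rho$ is the density of the mixture:

```latex
V_m^{E} = \frac{x_1 M_1 + x_2 M_2}{\rho}
        - \frac{x_1 M_1}{\rho_1}
        - \frac{x_2 M_2}{\rho_2}.
```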
Increasing the temperature has a greater effect on component self-association than on complex formation between molecules of the components in the [DFM + (NMF or NEF or DMF or DMA)] binary mixtures, shifting the equilibrium towards complex formation and giving a drop in excess molar volume with increasing temperature. The Prigogine-Flory-Patterson model has been applied at 298.15 K and reveals that the free-volume term is the most important contribution to the experimental excess molar volume data for the [DFM + (n-pentane or n-octane)] binary systems. For the [DFM + (NMF or DMF or DMA)] binary mixtures, the interactional and characteristic-pressure terms are the most important contributions in describing the sign of the experimental excess molar volume. These mixture systems contribute to the understanding of the interactions of polar amides (as protein model compounds) with non-polar alkanes in biological systems.
Keywords: alkanes, amides, excess thermodynamic parameters, Prigogine-Flory-Patterson model
Procedia PDF Downloads 355
330 Asylum Seekers' Legal Limbo under the Migrant Protection Protocols: Implications from a US-Mexico Border Project
Authors: Tania M. Guerrero, Ileana Cortes Santiago
Abstract:
Estamos Unidos Asylum Project has served more than 2,000 asylum seekers and migrants who are subject to the Migrant Protection Protocols (MPP) policy in Ciudad Juarez, Mexico. The U.S. policy, implemented in January 2019, has stripped asylum seekers of their rights, forcing people fleeing violence and discrimination to wait in conditions similar to, or worse than, those they fled, and to navigate their entire asylum process from a different country. Several civil rights groups, including the American Civil Liberties Union (ACLU), challenged MPP in U.S. federal courts in February 2019, arguing a violation of international U.S. obligations towards refugees and asylum seekers under the 1951 Refugee Convention and the Refugee Act of 1980 with regard to the non-refoulement principle. MPP has influenced Mexico's policies, enforcement, and prioritization regarding the presence of asylum seekers and migrants; it has also altered the way international non-governmental organizations work at Mexico's northern border. Estamos Unidos is a project situated in a logistical conundrum, as it provides needed legal services to a population in a legal and humanitarian void, i.e., a liminal space. The liminal space occupied by asylum seekers living under MPP is one that, in today's world, should not be overlooked; it dilutes asylum law and U.S. commitments to international protections. This paper provides analysis of, and broader implications from, a project whose main goal is to uphold the protections of asylum seekers and international refugee law.
The authors identified and analyzed four critical points based on fieldwork conducted since August 2019: (1) strategic coalition building with international, local, and national organizations; (2) brokering between domestic and international contexts and critical legal constraints; (3) flexibility in responding to sudden policy changes and to the diverse needs of the multiethnic groups of migrants and asylum seekers served by the project; and (4) the complexity of providing legal assistance to asylum seekers who are survivors of trauma. The authors concur with existing scholarship in highlighting the erosion of protections for asylum seekers and migrants as a dangerous and unjust global phenomenon.
Keywords: asylum, human rights, migrant protection protocols, refugee law
Procedia PDF Downloads 133
329 Adaptive Strategies to Nutrient Deficiency of Doubled Diploid Citrumelo 4475: A Prospective Study Based on Structural, Ultrastructural, Physiological and Biochemical Parameters
Authors: J. Oustric, L. Berti, J. Santini
Abstract:
Nowadays, an objective of sustainable agriculture, and in particular organic agriculture, is to reduce the level of fertilizer inputs used in crops. Limiting the quantity of fertilizer inputs would optimize the economic result while minimizing the environmental impact. Nutrient deficiency, particularly of a major nutrient (N, P, and K), can seriously affect fruit production and quality. In citrus crops, scion/rootstock combinations are frequently used to improve tolerance to various abiotic stresses. New rootstocks are needed to respond to these constraints, and the use of new tetraploid rootstocks better adapted to lower nutrient intake could offer a promising way forward. The aim of this work was to determine whether better tolerance to nutrient deficiency could be observed in a doubled diploid seedling, and whether this tolerance could be conferred on a common clementine scion when the seedling is used as a rootstock. We selected diploid (CM2x) and doubled diploid (CM4x) Citrumelo 4475 seedlings and common clementine (C) grafted onto Citrumelo 4475 diploid (C/CM2x) and doubled diploid (C/CM4x) rootstocks. The effects of nutrient deficiency on the seedlings and scion/rootstock combinations were analyzed by studying anatomical, structural and ultrastructural determinants (chlorosis, stomata, ostioles, and cells and their organelles), photosynthetic properties (leaf net photosynthetic rate (Pₙₑₜ), stomatal conductance (gₛ), chlorophyll a fluorescence (Fᵥ/Fₘ)) and an oxidative marker (malondialdehyde). Nutrient deficiency affected foliar tissues, physiological parameters, and oxidative metabolism differently in the leaves of seedlings depending on their ploidy level, and in those of the common clementine scion depending on the ploidy level of its rootstock.
Both CM4x and C/CM4x presented less foliar damage (chlorosis, and alterations of chloroplasts, mitochondria, and plastoglobuli), less alteration of photosynthetic processes (Pₙₑₜ, gₛ, and Fᵥ/Fₘ), and lower malondialdehyde accumulation than CM2x and C/CM2x under nutrient deficiency. Doubled diploid Citrumelo 4475 can thus improve nutrient deficiency tolerance, and its use as a rootstock confers this tolerance on the common clementine scion.
Keywords: nutrient deficiency, oxidative stress, photosynthesis, polyploid rootstocks
Procedia PDF Downloads 128
328 Brazilian Public Security: Governability and Constitutional Change
Authors: Gabriel Dolabella, Henrique Rangel, Stella Araújo, Carlos Bolonha, Igor de Lazari
Abstract:
Public security is a common subject on the Brazilian political agenda. The seventh-largest economy in the world has high crime and insecurity rates. Specialists try to explain this social picture on the basis of poverty, inequality, or public policies addressed to drug trafficking. This excerpt approaches the State measures taken to handle that picture. Public security, i.e., the law enforcement institutions, is therefore at the core of this paper, particularly the relationship among federal and state law enforcement agencies, mainly ruled by a system of urgency. The problems are informal changes in law enforcement management and public opinion's collaboration with these changes. Whenever there were huge international events, the Brazilian armed forces occupied the streets to assure law enforcement, ensuring order. This logic, considered over the long term, could impact the federal structure of the country. Post-Madisonian theorists observe that urgency is often associated with delegation of powers, which holds true for Brazilian law enforcement, but here there is a different delegation: the states continuously delegate law enforcement powers to the federal government through the use of the Armed Forces. Therefore, the hypothesis is: Brazil is undergoing a political process of federalization of public security. The political framework addressed here can be explained by the disrespect of legal constraints and the failure of rule-of-law theoretical models. The methodology of analysis is based on general criteria. Temporally, this study investigates events from 2003 onwards, when discussions about the disarmament statute began. Geographically, this study is limited to Brazilian borders. Materially, the analysis results from the observation of legal resources and political resources (pronouncements of government officials).
The main parameters are based on post-Madisonianism; the federalization of public security can be assessed through credibility and popularity, which allow evaluation of this political process of constitutional change. The objective is to demonstrate that the use of the Military Forces in public security is not a random fact or an isolated political event, in order to understand, from an institutional perspective, the political motivations and effects that stem from that use.
Keywords: public security, governability, rule of law, federalism
Procedia PDF Downloads 677
327 Sorghum Resilience and Sustainability under Limiting and Non-limiting Conditions of Water and Nitrogen
Authors: Muhammad Tanveer Altaf, Mehmet Bedir, Waqas Liaqat, Gönül Cömertpay, Volkan Çatalkaya, Celaluddin Barutçular, Nergiz Çoban, Ibrahim Cerit, Muhammad Azhar Nadeem, Tolga Karaköy, Faheem Shehzad Baloch
Abstract:
Food production needs to almost double by 2050 in order to feed around 9 billion people around the globe. Plant production relies largely on fertilizers, which are also a major contributor to environmental pollution. In addition, climatic conditions are unpredictable, and the earth is expected to face severe drought conditions in the future. Therefore, water and fertilizers, especially nitrogen, are considered the main constraints for future food security. To face these challenges, developing integrative approaches for germplasm characterization and selecting resilient genotypes that perform under limiting conditions are crucial for effective breeding to meet food requirements under climate change scenarios. This study is part of a European Research Area Network (ERA-NET) project for the characterization of a diversity panel of 172 sorghum accessions and six hybrids as control cultivars under limiting (+N/-H2O, -N/+H2O) and non-limiting (+N/+H2O) conditions. The study was planned to characterize sorghum diversity in relation to Resource Use Efficiency (RUE), with special attention to harnessing the genotype-by-environment (GxE) interaction from physiological and agronomic perspectives. Experiments were conducted at Adana, in a Mediterranean climate, using an augmented design, and data on various agronomic and physiological parameters were recorded. Abundant diversity was observed in the sorghum panel, and significant variation was seen between the limiting water and nitrogen conditions and the control experiment. Potential genotypes with the best performance under limiting conditions were identified. Whole-genome resequencing was performed on the entire germplasm under investigation for diversity analysis. GWAS analysis will be performed using the genotypic and phenotypic data, and linked markers will be identified.
The results of this study will inform the adaptation and improvement of sorghum under climate change conditions for future food security.
Keywords: germplasm, sorghum, drought, nitrogen, resource use efficiency, sequencing
Procedia PDF Downloads 77