Search results for: input dealers
319 Nondestructive Electrochemical Testing Method for Prestressed Concrete Structures
Authors: Tomoko Fukuyama, Osamu Senbu
Abstract:
Prestressed concrete is widely used in infrastructure such as roads and bridges. However, poor grout filling and corrosion of the prestressing (PC) steel are currently major issues in prestressed concrete structures. One obstacle to nondestructive corrosion detection of PC steel is the plastic pipe that covers it: the insulating property of the pipe makes nondestructive diagnosis difficult, so a practical technology to detect these defects is necessary for the maintenance of infrastructure. The goal of this research is the development of an electrochemical technique that can detect internal defects nondestructively from the surface of prestressed concrete. Ideally, the measurements should be conducted from the surface of structural members so that the diagnosis is nondestructive. In the present experiment, a prestressed concrete member is simplified as a layered specimen to simulate the current path between an input and an output electrode on the member surface. Specimens layered with mortar and the constituent materials of prestressed concrete (steel, polyethylene, stainless steel, or galvanized steel plates) were subjected to alternating current impedance measurement. The magnitude of the applied voltage was 0.01 V or 1 V, and the frequency range was from 10⁶ Hz to 10⁻² Hz. The frequency spectra of impedance, which relate to charge reactions activated by the electric field, were measured to clarify the effects of the material configurations and properties. In the civil engineering field, the Nyquist diagram is popular for analyzing impedance, and the shape of the plot is a good way to grasp electrical relaxation. However, it is not well suited to showing the influence of the measurement frequency, which is the reciprocal of the reaction time; hence, the Bode diagram is also applied to describe charge reactions in the present paper. The experimental results suggest that the alternating current impedance method is applicable to measurements on insulating materials and, eventually, to prestressed concrete diagnosis. At the same time, the frequency spectra of impedance reflect differences in material configuration, because the charge mobility varies with the substances involved and because the measuring frequency of the electric field determines the migration length of the charges influenced by it. However, the method could not distinguish differences in material thickness, which implies that it will be difficult to identify the amount of an air void or the thickness of a corrosion product layer by this technique. Keywords: capacitance, conductance, prestressed concrete, susceptance
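As an illustration of the kind of data discussed in this abstract, the following Python sketch computes the complex impedance of a simple equivalent circuit (a series resistance with a parallel resistance-capacitance element) over the stated 10⁶ Hz to 10⁻² Hz range and derives Nyquist and Bode representations. The circuit topology and parameter values are assumptions for illustration only, not the specimen model from the paper (requires NumPy).

```python
import numpy as np

# Illustrative equivalent circuit (an assumption, not the specimen model from the paper):
# mortar/solution resistance Rs in series with a parallel combination of a
# charge-transfer resistance Rct and a capacitance C (coating/interface layer).
Rs, Rct, C = 50.0, 5e3, 1e-7              # ohm, ohm, farad (hypothetical values)

f = np.logspace(6, -2, 200)               # 10^6 Hz down to 10^-2 Hz, as in the measurement range
w = 2 * np.pi * f
Z = Rs + 1.0 / (1.0 / Rct + 1j * w * C)   # complex impedance of the circuit

# Nyquist data: real part vs. negative imaginary part
nyquist = np.column_stack([Z.real, -Z.imag])

# Bode data: |Z| and phase angle vs. frequency
bode_mag = np.abs(Z)
bode_phase = np.degrees(np.angle(Z))

for fi, m, p in zip(f[::50], bode_mag[::50], bode_phase[::50]):
    print(f"f = {fi:10.3g} Hz   |Z| = {m:10.1f} ohm   phase = {p:7.2f} deg")
```

Plotting the Bode magnitude and phase against frequency makes the frequency dependence explicit, which is the advantage over the Nyquist plot noted in the abstract.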
Procedia PDF Downloads 413
318 A Prediction Method of Pollutants Distribution Pattern: Flare Motion Using Computational Fluid Dynamics (CFD) Fluent Model with Weather Research Forecast Input Model during Transition Season
Authors: Benedictus Asriparusa, Lathifah Al Hakimi, Aulia Husada
Abstract:
A large amount of energy is wasted through the release of natural gas associated with the oil industry. This release disturbs the environment, particularly the condition of the atmospheric layer globally, and contributes to global warming. This research presents an overview of the methods employed by researchers at PT. Chevron Pacific Indonesia in the Minas area to establish a new prediction method for measuring and reducing gas flaring and its emissions. The method emphasizes advanced research involving analytical studies, numerical studies, modeling, and computer simulations, amongst other techniques. A flaring system is the controlled burning of natural gas in the course of routine oil and gas production operations. This burning occurs at the end of a flare stack or boom. The combustion process releases emissions of greenhouse gases such as NO2, CO2, SO2, etc. This condition affects the chemical composition of the air and the environment around the boundary layer, mainly during the transition season. The transition season in Indonesia is very difficult to predict because of the interaction of two different air masses. This research focused on the transition season of 2013, for which a simulation was needed to create a new pattern of the pollutant distribution. The paper outlines trends in gas-flaring modeling and current developments in predicting the dominant variables in the pollutant distribution. A Fluent model is used to simulate the distribution of pollutant gases coming out of the stack, whereas WRF model output is used to overcome the limitations of the meteorological data and atmospheric conditions available for the study area. Based on the model runs, the most influential factor was wind speed. The goal of the simulation is to predict the new pattern of pollutant distribution based on the times at which the fastest and slowest winds occur. According to the simulation results, the fastest wind (end of March) moves pollutants in a horizontal direction, and the slowest wind (middle of May) moves pollutants vertically. In addition, with the flare stack designed in compliance with the EPA Oil and Gas Facility Stack Parameters, the pollutant concentrations remain under the NAAQS (National Ambient Air Quality Standards) thresholds. Keywords: flare motion, new prediction, pollutants distribution, transition season, WRF model
Procedia PDF Downloads 556
317 A Framework of Virtualized Software Controller for Smart Manufacturing
Authors: Pin Xiu Chen, Shang Liang Chen
Abstract:
A virtualized software controller is developed in this research to replace traditional hardware control units. This virtualized software controller transfers motion interpolation calculations from the motion control units of end devices to edge computing platforms, thereby reducing the end devices' computational load and hardware requirements and making maintenance and updates easier. The study also applies the concept of microservices, dividing the control system into several small functional modules that are then deployed on a cloud data server. This reduces the interdependency among modules and enhances the overall system's flexibility and scalability. Finally, with containerization technology, the system can be deployed and started in a matter of seconds, which is more efficient than traditional virtual machine deployment methods. Furthermore, this virtualized software controller communicates with end control devices via wireless networks, making the placement of production equipment or the redesign of processes more flexible and no longer limited by physical wiring. To handle the large data flow and maintain low-latency transmission, this study integrates 5G technology, fully utilizing its high speed, wide bandwidth, and low latency to achieve rapid and stable remote machine control. An experimental setup is designed to verify the feasibility and test the performance of this framework. This study designs a smart manufacturing site with a 5G communication architecture, serving as a field for experimental data collection and performance testing. The smart manufacturing site includes one robotic arm, three Computer Numerical Control machine tools, several Input/Output ports, and an edge computing architecture. All machinery information is uploaded to edge computing servers and cloud servers via 5G communication and the Internet of Things framework. After analysis and computation, this information is converted into motion control commands, which are transmitted back to the relevant machinery for motion control through 5G communication. The communication time intervals at each stage are calculated using the C++ chrono library to measure the time difference for each command transmission. The relevant test results will be organized and presented in the full text. Keywords: 5G, MEC, microservices, virtualized software controller, smart manufacturing
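The stage-wise latency measurement described above is done with the C++ chrono library in the paper; the sketch below illustrates the same idea in Python with time.perf_counter for consistency with the other examples here. The stage names and the sleep calls standing in for real work are assumptions.

```python
import time

def timed_stage(label, fn, *args, **kwargs):
    """Run one pipeline stage and record how long it took."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    print(f"{label:30s} {elapsed_ms:8.3f} ms")
    return result

# Hypothetical stages of the command path (device -> edge -> cloud -> device).
def upload_machine_state():   time.sleep(0.002)  # stand-in for the 5G/IoT upload
def compute_interpolation():  time.sleep(0.001)  # stand-in for edge-side motion interpolation
def send_motion_command():    time.sleep(0.002)  # stand-in for the command transmission back

timed_stage("upload machine state", upload_machine_state)
timed_stage("edge motion interpolation", compute_interpolation)
timed_stage("return motion command", send_motion_command)
```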
Procedia PDF Downloads 82
316 The Role of the Child's Previous Inventory in Verb Overgeneralization in Spanish Child Language: A Case Study
Authors: Mary Rosa Espinosa-Ochoa
Abstract:
The study of overgeneralization in inflectional morphology provides evidence for understanding how a child's mind works when applying linguistic patterns in a novel way. High-frequency inflectional forms in the input cause inappropriate use in contexts related to lower-frequency forms. Children learn verbs as lexical items, and new forms develop only gradually, around their second year: most of the utterances that children produce are closely related to what they have previously produced. Spanish has a complex verbal system that inflects for person, mood, and tense. Approximately 200 verbs are irregular, and bare roots always require an inflected form, which represents a challenge for the memory. The aim of this research is to investigate i) what kinds of overgeneralization errors children make in verb production, ii) to what extent these errors are related to verb forms previously produced, and iii) whether the overgeneralized verb components are also frequent in the child's linguistic inventory. It consists of a high-density longitudinal study of a middle-class girl (1;11,24-2;02,24) from Mexico City, whose utterances were recorded almost daily for three months to compile a unique corpus in the Spanish language. Of the 358 types of inflected verbs produced by the child, 9.11% are overgeneralizations. Not only are inflected forms (verbal and pronominal clitics) overgeneralized, but also verbal roots. Each of the forms can be traced to previous utterances, and they show that the child is detecting morphological patterns. Neither verbal roots nor inflected forms are associated with high-frequency patterns in her own speech. For example, the child alternates the bare roots of an irregular verb, cáye-te* and cáiga-te* (“fall down”), to express the imperative of the verb cá-e-te (fall down.IMPERATIVE-PRONOMINAL.CLITIC), although cay-ó (PAST.PERF.3SG) is the most frequent form in her previous complete inventory, and the combined frequency of caer (INF), cae (PRES.INDICATIVE.3SG), and caes (PRES.INDICATIVE.2SG) is the same as that of caiga (PRES.SUBJ.1SG and 3SG). These results provide evidence that a) two forms of the same verb compete in the child's memory, and b) although the child uses her own inventory to create new forms, these forms are not necessarily frequent in her memory storage, which means that her mind is more sensitive to external stimuli. Language acquisition is a developing process, given the sensitivity of the human mind to linguistic interaction with the outside world. Keywords: inflection, morphology, child language acquisition, Spanish
Procedia PDF Downloads 101
315 CyberSteer: Cyber-Human Approach for Safely Shaping Autonomous Robotic Behavior to Comply with Human Intention
Authors: Vinicius G. Goecks, Gregory M. Gremillion, William D. Nothwang
Abstract:
Modern approaches to training intelligent agents rely on prolonged training sessions, large amounts of input data, and multiple interactions with the environment. This restricts the application of these learning algorithms in robotics and real-world applications, in which there is low tolerance for inadequate actions, interactions are expensive, and real-time processing and action are required. This paper addresses this issue by introducing CyberSteer, a novel approach to efficiently designing intrinsic reward functions based on human intention to guide deep reinforcement learning agents with no environment-dependent rewards. CyberSteer uses non-expert human operators for an initial demonstration of a given task or desired behavior. The trajectories collected are used to train a behavior cloning deep neural network that asynchronously runs in the background and suggests actions to the deep reinforcement learning module. An intrinsic reward is computed based on the similarity between the actions suggested and those taken by the deep reinforcement learning algorithm commanding the agent. This intrinsic reward can also be reshaped through additional human demonstration or critique. This approach removes the need for environment-dependent or hand-engineered rewards while still being able to safely shape the behavior of autonomous robotic agents, in this case based on human intention. CyberSteer is tested in a high-fidelity unmanned aerial vehicle simulation environment, Microsoft AirSim. The simulated aerial robot performs collision avoidance through a cluttered forest environment using forward-looking depth sensing and roll, pitch, and yaw reference angle commands to the flight controller. This approach shows that the behavior of robotic systems can be shaped in a reduced amount of time when guided by a non-expert human who is only aware of the high-level goals of the task. Decreasing the amount of training time required and increasing safety during training maneuvers will allow for faster deployment of intelligent robotic agents in dynamic real-world applications. Keywords: human-robot interaction, intelligent robots, robot learning, semi-supervised learning, unmanned aerial vehicles
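The abstract states that the intrinsic reward is computed from the similarity between the action suggested by the behavior-cloning network and the action taken by the reinforcement learning policy, but does not give the metric. A minimal sketch, assuming an exponential-of-negative-distance similarity over roll/pitch/yaw reference angles (the values and the scale factor are hypothetical):

```python
import numpy as np

def intrinsic_reward(bc_action, rl_action, scale=1.0):
    """
    Reward the RL agent for staying close to what the behaviour-cloning
    network (trained on human demonstrations) would have done.
    The exponential-of-negative-distance form is an assumption; the abstract
    only states that the reward is based on action similarity.
    """
    bc_action = np.asarray(bc_action, dtype=float)
    rl_action = np.asarray(rl_action, dtype=float)
    distance = np.linalg.norm(bc_action - rl_action)
    return scale * np.exp(-distance)

# Hypothetical roll/pitch/yaw reference angles (radians) for one time step.
suggested = [0.05, -0.02, 0.10]   # from the behaviour-cloning network
taken     = [0.04, -0.01, 0.12]   # chosen by the deep RL policy
print(f"intrinsic reward: {intrinsic_reward(suggested, taken):.3f}")
```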
Procedia PDF Downloads 259
314 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously. Keywords: cost prediction, machine learning, project management, random forest, neural networks
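A minimal sketch of the two model families named above, trained on synthetic activity-level data. The feature set, target definition and hyperparameters are assumptions, not the case-study data (requires scikit-learn and NumPy).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic activity-level data (assumed feature set, not the case-study data):
# planned cost, planned duration, number of scope changes, material-delivery delay.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(1e4, 5e5, n),    # planned cost
    rng.uniform(5, 120, n),      # planned duration (days)
    rng.integers(0, 6, n),       # scope changes
    rng.uniform(0, 30, n),       # material-delivery delay (days)
])
# Toy target: cost overrun grows with scope changes and delivery delays.
y = 0.02 * X[:, 0] * (1 + 0.1 * X[:, 2]) + 800 * X[:, 3] + rng.normal(0, 5e3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
nn = make_pipeline(StandardScaler(),
                   MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                                random_state=0)).fit(X_tr, y_tr)

print("Random Forest  MAE:", round(mean_absolute_error(y_te, rf.predict(X_te)), 1))
print("Neural Network MAE:", round(mean_absolute_error(y_te, nn.predict(X_te)), 1))
```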
Procedia PDF Downloads 54
313 Developing a Sustainable System to Deliver Early Intervention for Emotional Health through Australian Schools
Authors: Rebecca-Lee Kuhnert, Ron Rapee
Abstract:
Up to 15% of Australian youth will experience an emotional disorder, yet relatively few get the help they need. Schools provide an ideal environment through which we can identify young people who are struggling and provide them with appropriate help. Universal mental health screening is a method by which all young people in school can be quickly assessed for emotional disorders, after which identified youth can be linked to appropriate health services. Despite the obvious logic of this process, universal mental health screening has received little scientific evaluation and even less application in Australian schools. This study will develop methods for Australian education systems to help identify young people (aged 9-17 years old) who are struggling with existing and emerging emotional disorders. Prior to testing, a series of focus groups will be run to get feedback and input from young people, parents, teachers, and mental health professionals. They will be asked about their thoughts on school-based screening methods and how best to help students at risk of emotional distress. Schools (n=91) across New South Wales, Australia will be randomised to do either immediate screening (in May 2021) or delayed screening (in February 2022). Students in immediate screening schools will complete a long online mental health screener consisting of standard emotional health questionnaires. Ultimately, this large set of items will be reduced to a small number of items to form the final brief screener. Students who score in the “at-risk” range on any measure of emotional health problems will be identified to schools and offered pathways to relevant help according to the most accepted and approved processes identified by the focus groups. Nine months later, the same process will occur among delayed screening schools. At this same time, students in the immediate screening schools will complete screening for a second time. This will allow a direct comparison of the emotional health and help-seeking between youth whose schools had engaged in the screening and pathways-to-care process (immediate) and those whose schools had not engaged in the process (delayed). It is hypothesised that there will be a significant increase in students who receive help from mental health support services after screening, compared with baseline. It is also predicted that all students will show significantly less emotional distress after screening and access to pathways of care. This study will be an important contribution to Australian youth mental health prevention and early intervention by determining whether school screening leads to a greater number of young people with emotional disorders getting the help that they need and improving their mental health outcomes. Keywords: children and young people, early intervention, mental health, mental health screening, prevention, school-based mental health
Procedia PDF Downloads 96
312 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose
Authors: Kumar Shashvat, Amol P. Bhondekar
Abstract:
Among the five senses, odor is the most evocative and the least understood. Odor testing has been mysterious, and odor data unfamiliar to most practitioners. The problem of recognition and classification of odor is important to address: the ability to smell and to predict whether an artifact is still of use or has become undesirable for consumption is worth imitating in a model. The general industrial standard for this classification is color-based; however, odor can be a better classifier than color and, if incorporated into a machine, will be extremely useful. For the classification of the odor of peas, trees and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications but are unable to make effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they are able to handle problems such as settings where the variability in the range of possible input vectors is enormous. Generative models are used in machine learning either to model data directly or as an intermediate step in forming a probability density function. The models Linear Discriminant Analysis and the Naive Bayes classifier have been used for classification of the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. The main advantage of using generative models is that they make stronger assumptions about the data, specifically about the distribution of the predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is an electronic nose. This device is designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results have been evaluated in the form of performance measures, i.e., accuracy, precision and recall. The experimental results show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset. Keywords: odor classification, generative models, naive bayes, linear discriminant analysis
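A minimal sketch of the two classifiers compared above, applied to synthetic electronic-nose sensor readings. The sensor layout, class definitions and data are assumptions; the real cashew dataset is not reproduced here (requires scikit-learn and NumPy).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Synthetic electronic-nose readings (assumed layout: one row per sniff,
# one column per gas sensor).
rng = np.random.default_rng(1)
n_per_class, n_sensors = 100, 8
fresh = rng.normal(loc=1.0, scale=0.3, size=(n_per_class, n_sensors))
spoiled = rng.normal(loc=1.6, scale=0.3, size=(n_per_class, n_sensors))
X = np.vstack([fresh, spoiled])
y = np.array([0] * n_per_class + [1] * n_per_class)   # 0 = acceptable, 1 = undesirable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)

for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("Naive Bayes", GaussianNB())]:
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:12s} accuracy={accuracy_score(y_te, y_pred):.2f} "
          f"precision={precision_score(y_te, y_pred):.2f} "
          f"recall={recall_score(y_te, y_pred):.2f}")
```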
Procedia PDF Downloads 387
311 Comprehensive Geriatric Assessments: An Audit into Assessing and Improving Uptake on Geriatric Wards at King’s College Hospital, London
Authors: Michael Adebayo, Saheed Lawal
Abstract:
The Comprehensive Geriatric Assessment (CGA) is the multidimensional tool used to assess elderly, frail patients either on admission to hospital care or at a community level in primary care. It is a tool designed with the aim of using a holistic approach to managing patients. A Cochrane review of CGA use in 2011 found that the likelihood of being alive and living in their own home rises by 30% post-discharge. RCTs have also found 10–15% reductions in readmission rates and reductions in institutionalization, resource use and costs. Past audit cycles at King's College Hospital, Denmark Hill had shown inconsistent evidence of CGA completion in patient discharge summaries (less than 50%). Junior Doctors on the Health and Ageing (HAU) wards have struggled to sustain the efforts of past audit cycles due to the quick turnover in staff (four-month placements for trainees). This 7th cycle created a multi-faceted approach to solving this problem amongst staff and creating lasting change. Methods: 1. We adopted multidisciplinary team involvement to support Doctors. MDT staff, e.g. Nurses, Physiotherapists, Occupational Therapists and Dieticians, were actively encouraged to fill in the CGA document. 2. We added a CGA Document Pro-forma to “Sunrise EPR” (Trust computer system). These CGAs were to be automatically included in the discharge summary. 3. Prior to assessing uptake, we used a spot audit questionnaire to assess staff awareness/knowledge of what a CGA was. 4. We designed and placed posters highlighting the domains of the CGA and the MDT roles suited to each domain on the geriatric “Health and Ageing” (HAU) wards in the hospital. 5. We performed an audit of the percentage of discharge summaries which include CGA and MDT role input. 6. We nominated ward champions on each ward from each multidisciplinary specialty to monitor and encourage colleagues to actively complete CGAs. 7. We initiated further education of ward staff on the CGA's importance by discussion at board rounds and weekly multidisciplinary meetings. Outcomes: 1. The majority of respondents to our spot audit were aware of what a CGA was, but fewer had used the EPR document to complete one. 2. We found that CGAs were not being commenced for nearly 50% of patients discharged on HAU wards and the Frailty Assessment Unit. Keywords: comprehensive geriatric assessment, CGA, multidisciplinary team, quality of life, mortality
Procedia PDF Downloads 84
310 The Extent of Virgin Olive-Oil Prices' Distribution Revealing the Behavior of Market Speculators
Authors: Fathi Abid, Bilel Kaffel
Abstract:
The olive tree, the olive harvest during the winter season, and the production of olive oil, better known to professionals as the crushing operation, have interested institutional traders such as olive-oil offices and private companies such as food-industry firms refining and extracting pomace olive oil, as well as public and private export-import companies specializing in olive oil. The major problem facing producers of olive oil each winter campaign, contrary to what might be expected, is not whether the harvest will be good, but whether the sale price will allow them to cover production costs and achieve a reasonable margin of profit. These questions are entirely legitimate judging by the importance of the issue and the heavy complexity of the uncertainty and competition, made tougher by a high level of indebtedness and by the experience and expertise of speculators and producers whose objectives are sometimes conflicting. The aim of this paper is to study the formation mechanism of olive oil prices in order to learn about speculators' behavior and expectations in the market, how they contribute through their industry knowledge and financial alliances, and the size of the financial challenge that may be involved for them in building private information sources globally to take advantage. The methodology used in this paper is based on two stages. In the first stage, we study econometrically the formation mechanisms of the olive oil price in order to understand market participant behavior by implementing ARMA, SARMA, GARCH and stochastic diffusion process models. The second stage is devoted to prediction purposes, using a combined wavelet-ANN approach. Our main findings indicate that olive oil market participants interact with each other in a way that promotes the formation of stylized facts. The unstable behavior of participants creates volatility clustering, non-linear dependence and cyclicity phenomena. By imitating each other in some periods of the campaign, different participants contribute to the fat tails observed in the olive oil price distribution. The best prediction model for the olive oil price is based on a back-propagation artificial neural network with input information based on wavelet decomposition and the recent past history of the series. Keywords: olive oil price, stylized facts, ARMA model, SARMA model, GARCH model, combined wavelet-artificial neural network, continuous-time stochastic volatility model
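A compact sketch of the two modeling stages described above: an ARMA-type benchmark fit and a combined wavelet-ANN one-step-ahead forecast, both run on a synthetic price series. The series, lag structure and network size are illustrative assumptions, not the paper's data or tuned models (requires statsmodels, PyWavelets, scikit-learn and NumPy).

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily olive-oil price series (the real series is not reproduced here).
rng = np.random.default_rng(2)
n = 400
price = 3.0 + np.cumsum(rng.normal(0, 0.01, n)) + 0.1 * np.sin(np.arange(n) * 2 * np.pi / 90)

# 1) Classical time-series benchmark: an ARMA-type fit on the price series.
arma_fit = ARIMA(price, order=(1, 0, 1)).fit()
print("ARMA(1,1) one-step forecast:", round(arma_fit.forecast(1)[0], 3))

# 2) Combined wavelet-ANN forecast: smooth with a discrete wavelet transform,
#    then train an MLP on lagged values of the smoothed series.
coeffs = pywt.wavedec(price, "db4", level=3)
coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]        # keep only the approximation
smooth = pywt.waverec(coeffs, "db4")[:n]

lags = 5
X = np.column_stack([smooth[i:n - lags + i] for i in range(lags)])  # rolling windows
y = price[lags:]                                                    # next observed price
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X[:-1], y[:-1])
print("wavelet-ANN one-step forecast:", round(mlp.predict(X[-1:])[0], 3))
```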
Procedia PDF Downloads 339
309 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously. Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
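The abstract notes that the Random Forest model identified key cost drivers such as scope changes and material-delivery delays. A minimal sketch of how such a ranking can be read off a fitted forest via impurity-based feature importances, using synthetic data with assumed feature names (requires scikit-learn and NumPy):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic activity-level features; the names are illustrative, not the
# project's actual schedule fields.
rng = np.random.default_rng(3)
n = 500
features = ["planned_cost", "planned_duration", "scope_changes", "material_delay"]
X = np.column_stack([
    rng.uniform(1e4, 5e5, n),
    rng.uniform(5, 120, n),
    rng.integers(0, 6, n),
    rng.uniform(0, 30, n),
])
y = 0.02 * X[:, 0] * (1 + 0.1 * X[:, 2]) + 800 * X[:, 3] + rng.normal(0, 5e3, n)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# Rank the assumed cost drivers by their impurity-based importance.
for name, score in sorted(zip(features, rf.feature_importances_),
                          key=lambda p: p[1], reverse=True):
    print(f"{name:18s} {score:.3f}")
```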
Procedia PDF Downloads 38
308 Water Supply and Demand Analysis for Ranchi City under Climate Change Using Water Evaluation and Planning System Model
Authors: Pappu Kumar, Ajai Singh, Anshuman Singh
Abstract:
Different water user sectors, such as rural, urban, mining, subsistence and commercial irrigated agriculture, commercial forestry, industry, and power generation, are present in the catchment of the Subarnarekha River Basin and Ranchi city, and there is an inequity issue in access to water. The development of the rural area, the construction of new power generation plants, population growth, the requirement to meet unmet water demand and environmental flows, and the revitalization of small-scale irrigation schemes are going to increase water demands in almost all of the water-stressed catchment. The WEAP model was developed by the Stockholm Environment Institute (SEI) to enable evaluation of planning and management issues associated with water resources development. The WEAP model can be used for both urban and rural areas and can address a wide range of issues including sectoral demand analyses, water conservation, water rights and allocation priorities, river flow simulation, reservoir operation, ecosystem requirements and project cost-benefit analyses. The model is a tool for integrated water resource management and planning, covering, for example, forecasting of water demand, supply, inflows, outflows, water use, reuse, water quality, priority areas and hydropower generation. In the present study, efforts have been made to assess the utility of the WEAP model for water supply and demand analysis for Ranchi city. Detailed work was carried out to ascertain whether the WEAP model could be used for generating different scenarios of water requirements, which could help in future water planning. The water supplied to Ranchi city is mostly contributed by the study river, the Hatiya reservoir and groundwater. Data were collected from various agencies such as PHE Ranchi, the 2011 census, the Doranda reservoir and the meteorology department. This collected and generated data was given as input to the WEAP model. The model generated discharge trends for the study river up to 2050 and, at the same time, generated scenarios of future demand and supply. The model outputs predict a water requirement of 12 million litres. The results will help in drafting future policies regarding water supply and demand under changing climatic scenarios. Keywords: WEAP model, water demand analysis, Ranchi, scenarios
Procedia PDF Downloads 419
307 Establishing Community-Based Pro-Biodiversity Enterprise in the Philippines: A Climate Change Adaptation Strategy towards Agro-Biodiversity Conservation and Local Green Economic Development
Authors: Dina Magnaye
Abstract:
In the Philippines, the performance of the agricultural sector is gauged through crop productivity and returns from farm production rather than the biodiversity in the agricultural ecosystem. Agricultural development hinges on the overall goal of increasing productivity through intensive agriculture, monoculture systems, the utilization of high-yielding plant varieties, and genetic upgrading of animals. This merits an analysis of the role of agro-biodiversity in terms of increasing productivity, food security and economic returns from community-based pro-biodiversity enterprises. These enterprises conserve biodiversity while equitably sharing production income in the utilization of biological resources. The study aims to determine how community-based pro-biodiversity enterprises become instrumental in local climate change adaptation and agro-biodiversity conservation, as input to local green economic development planning. It also involves an assessment of the role of agro-biodiversity in terms of increasing productivity, food security and economic returns from community-based pro-biodiversity enterprises. The perceptions of local community members, both in urban and upland rural areas, of community-based pro-biodiversity enterprises were evaluated. These served as a basis for developing a planning modality that can be mainstreamed in the management of local green economic enterprises to benefit the environment, provide local income opportunities, conserve species diversity, and sustain environment-friendly farming systems and practices. The interviews conducted with organic farmer-owners, entrepreneur-organic farmers, and organic farm workers revealed that pro-biodiversity enterprises such as organic farming involve the cyclic use of natural resources within the carrying capacity of a farm; recognition of the value of tradition and culture, especially in the upland rural area; enhancement of socio-economic capacity; conservation of ecosystems in harmony with nature; and climate change mitigation. The suggested planning modality for community-based pro-biodiversity enterprises for a green economy encompasses four (4) phases: community resource or capital asset profiling; stakeholder vision development; strategy formulation for sustained enterprises; and monitoring and evaluation. Keywords: agro-biodiversity, agro-biodiversity conservation, local green economy, organic farming, pro-biodiversity enterprise
Procedia PDF Downloads 362
306 Engineering Topology of Ecological Model for Orientation Impact of Sustainability Urban Environments: The Spatial-Economic Modeling
Authors: Moustafa Osman Mohammed
Abstract:
The modeling of a spatial-economic database is crucial for relating economic network structure to social development. Sustainability within the spatial-economic model gives attention to green businesses that comply with Earth's systems. The natural exchange patterns of ecosystems have consistent and periodic cycles that preserve energy and material flows in systems ecology. When network topology influences formal and informal communication in systems ecology, ecosystems are postulated to influence the basic level of spatial sustainability outcomes (i.e., project compatibility success). These instrumentalities in turn impact various aspects of the second level of spatial sustainability outcomes (i.e., participant social security satisfaction). The sustainability outcomes are modeled as a composite structure based on a network analysis model that calculates the prosperity of panel databases for efficiency values from 2005 to 2025. The database models the spatial structure to represent the state-of-the-art value-orientation impact and the corresponding complexity of sustainability issues (e.g., building a consistent database necessary to approach the spatial structure; constructing the spatial-economic-ecological model; developing a set of sustainability indicators associated with the model; allowing quantification of social, economic and environmental impacts; using value orientation as a set of important sustainability policy measures), and to demonstrate the reliability of the spatial structure. The structure of the spatial-ecological model is established for management schemes from the perspective of pollutants from multiple sources through input–output criteria. These criteria evaluate the spillover effect in order to conduct Monte Carlo simulations and sensitivity analyses within a unique spatial structure. The balance within “equilibrium patterns,” such as collective biosphere features, has a composite index of many distributed feedback flows, which have a dynamic structure related to physical and chemical properties that evolve gradually toward incremental patterns. While these spatial structures argue from an ecological modeling of resource savings, static loads are not decisive from an artistic/architectural perspective. The model attempts to unify analytic and analogical spatial structures for the development of urban environments in a relational database setting, using optimization software to integrate the spatial structure, where the process is based on the engineering topology of systems ecology. Keywords: ecological modeling, spatial structure, orientation impact, composite index, industrial ecology
Procedia PDF Downloads 68
305 A webGIS Methodology to Support Sediments Management in Wallonia
Authors: Nathalie Stephenne, Mathieu Veschkens, Stéphane Palm, Christophe Charlemagne, Jacques Defoux
Abstract:
According to Europe's first River Basin Management Plans (RBMPs), 56% of European rivers failed to achieve the good-status targets of the Water Framework Directive (WFD). In Central European countries such as Belgium, even more than 80% of rivers failed to achieve the WFD quality targets. Although the RBMPs should reduce the stressors and improve water body status, their potential to address multiple-stress situations is limited due to insufficient knowledge on combined effects, multi-stress, prioritization of measures, impact on ecology and implementation effects. This paper describes a webGIS prototype developed for the Walloon administration to improve the communication and the management of sediment dredging actions carried out in rivers and lakes in the frame of the RBMPs. A large number of stakeholders are involved in the management of rivers and lakes in Wallonia. They are in charge of technical aspects (clients and dredging operators, organizations involved in the treatment of waste…), management (managers involved in WFD implementation at communal, provincial or regional level) or policy making (people responsible for policy compliance or legislation revision). These different kinds of stakeholders need different information and data to cover their duties but have to interact closely at different levels. Moreover, information has to be shared between them to improve the management quality of dredging operations within the ecological system. In the Walloon legislation, leveling dredged sediments on banks requires an official authorization from the administration. This request refers to spatial information such as the official land use map, the cadastral map, and the distance to potential pollution sources. The production of a collective geodatabase can facilitate the management of these authorizations from both sides. The proposed internet system integrates documents, data input, integration of data from disparate sources, map representation, database queries, analysis of monitoring data, presentation of results and cartographic visualization. A prototype web application using the geoviewer API chosen by the Geomatics department of the SPW has been developed and discussed with some potential users to facilitate the communication, the management and the quality of the data. The structure of the paper states the why, what, who and how of this communication tool. Keywords: sediments, web application, GIS, rivers management
Procedia PDF Downloads 405
304 Optimized Renewable Energy Mix for Energy Saving in Waste Water Treatment Plants
Authors: J. D. García Espinel, Paula Pérez Sánchez, Carlos Egea Ruiz, Carlos Lardín Mifsut, Andrés López-Aranguren Oliver
Abstract:
This paper briefly describes three main actions on a Waste Water Treatment Plant (WWTP) for reducing its energy consumption: optimization of the biological reactor in the aeration stage by including new control algorithms and introducing new efficient equipment; the installation of an innovative hybrid system with zero grid injection (formed by 100 kW of PV generation and 5 kW of mini-wind generation); and an intelligent management system controlling load consumption and energy generation in the most efficient way. This project, called RENEWAT and included in the European Commission LIFE 2013 call, has the main objective of reducing energy consumption through different actions on the processes which take place in a WWTP, with the purpose of promoting the use of treated waste water for irrigation and decreasing CO2 emissions. Treatment in a WWTP is always required before waste water can be reused for irrigation or discharged into water bodies. However, the energy demand of the treatment process is high enough to make the price of treated water exceed that of drinking water. This makes it very difficult for any policy to encourage the re-use of treated water, with a great impact on the water cycle, particularly in those areas suffering from hydric stress or deficiency. The cost of treating waste water involves another climate-change-related burden: the energy necessary for the process is obtained mainly from the electricity grid, which is, in most cases in Europe, energy obtained from the burning of fossil fuels. The innovative part of this project is based on the implementation, adaptation and integration of solutions for this problem, together with a new concept of the integration of energy input and operational energy demand. Moreover, there is an important qualitative jump between the technologies currently used and those proposed for the project, which gives it an innovative character, due to the fact that there are no similar previous experiences of a WWTP including an intelligent discrimination of energy sources, integrating renewable ones (PV and wind) and the grid. Keywords: aeration system, biological reactor, CO2 emissions, energy efficiency, hybrid systems, LIFE 2013 call, process optimization, renewable energy sources, waste water treatment plants
Procedia PDF Downloads 352
303 Implementing Peer Mediated Interventions with Visual Supports for Social Skills Development in a School-Based Work Setting with Secondary Students with Autism
Authors: Karen Eastman
Abstract:
More youths and young adults with autism spectrum disorder (ASD) have been entering the workforce in recent years. Historically, students with ASD struggle after leaving high school and experience lower rates of employment, with social skills continuing to be the most problematic area of concern. Special education teachers may find it challenging to identify effective combinations of evidence-based practices (EBPs) and supports to best guide these students. One EBP, Peer Mediated Instruction and Intervention (PMII), has been well documented in the literature as being effective for younger students with autism but has not been researched as much with older students and adults, particularly in work settings. A need to combine PMII with other EBPs has been identified as a way to achieve a greater positive impact than any practice alone. A multiple-baseline-across-skills design was used in this research project with two participants in different settings. PMII was combined with Visual Supports, with typical peers being trained in both practices. PMII is an evidence-based practice used to address social concerns by training peers without disabilities in how they can provide feedback and support to the student with ASD during social interactions in structured settings. The peers without disabilities were the instructors, while the adults facilitated the social situations and provided support to both the peers and the students with ASD when needed. Because many individuals with ASD learn best with visual input, rather than using only the spoken word (verbal directions and feedback), Visual Supports were used in conjunction with PMII. Visual Supports can include written words, pictures, symbols, videos, or objects. In this project, the Visual Supports used were written social scripts, videos, Stop and Think signs, written reminder cards, a school map, and a pictorial task analysis of work tasks. Variables that may affect intervention outcomes in this project included attendance at school and school-based work settings for both the students with ASD and the peers without disabilities, as well as behaviors and responses from others in the settings. Qualitative data were also collected from observations and surveys with peers about the process and their role. Data indicated that the students with ASD responded more positively to redirection and support from their peers than from teachers and staff, and showed an increase in positive interactions with others. Those surveyed indicated a positive attitude toward, and response to, the use of peer interventions with visual supports. Keywords: autism, social skills, vocational training, peer interventions
Procedia PDF Downloads 42
302 Contributions of Natural and Human Activities to Urban Surface Runoff with Different Hydrological Scenarios (Orléans, France)
Authors: Al-Juhaishi Mohammed, Mikael Motelica-Heino, Fabrice Muller, Audrey Guirimand-Dufour, Christian Défarge
Abstract:
This study aims at improving knowledge of the urban hydrological cycle of the Orléans agglomeration (France) and understanding the relationship between the physical and chemical parameters of urban surface runoff and the hydrological conditions. In particular, water quality parameters such as pH, conductivity, total dissolved solids, major dissolved cations and anions, and chemical and biological oxygen demands were monitored for three types of urban water discharges (wastewater treatment plant (WWTP) output, storm overflow and stormwater outfall) under two hydrological scenarios (dry and wet weather). The first results were obtained over a period of five months. Each investigated outfall (Ormes and l’Egoutier) represents an urban runoff source that receives water from road runoff, gutters, the irrigation of gardens and other sources of flow over the Earth's surface that drains into its catchment and carries it to the Loire River. In wet weather conditions, there is rainwater runoff and an additional input from the roof gutters that enters the stormwater system during rainfall. For comparison, La Chilesse, a storm overflow located upstream of the WWTP, was selected in our study as a potential source of waste water. The comparison of the physical-chemical parameters (total dissolved solids, turbidity, pH, conductivity, dissolved organic carbon (DOC), concentrations of major cations and anions) together with the chemical oxygen demand (COD) and biological oxygen demand (BOD) helped to characterize the sources of runoff waters in the different watersheds. It also helped to highlight the infiltration of wastewater into some stormwater systems that discharge directly into the Loire River. The values of the conductivity measured in the outflow of Ormes were always higher than those measured in the other two outlets. The results showed a temporal variation of conductivity for the Ormes outfall, from 1465 µS cm-1 in the dry weather flow to 650 µS cm-1 in the wet weather flow, and also a spatial variation in the wet weather flow, from 650 µS cm-1 in the Ormes outfall to 281 μS cm-1 in the L’Egouttier outfall. The ultimate BOD (BOD28) showed a significant decrease in the La Corne outfall, from 210 mg L-1 in the wet weather flow to 75 mg L-1 in the dry weather flow, because of the nutrient load transported by the runoff. Keywords: BOD, COD, the Loire River, urban hydrology, urban dry and wet weather discharges, macronutrients
Procedia PDF Downloads 266
301 Design, Numerical Simulation, Fabrication and Physical Experimentation of the Tesla’s Cohesion Type Bladeless Turbine
Authors: M.Sivaramakrishnaiah, D. S .Nasan, P. V. Subhanjeneyulu, J. A. Sandeep Kumar, N. Sreenivasulu, B. V. Amarnath Reddy, B. Veeralingam
Abstract:
The design, numerical simulation, fabrication, and physical experimentation of the Tesla cohesion-type bladeless centripetal turbine for generating electrical power are presented in this research paper. Pressurized air combined with water via a nozzle system is made to pass tangentially through a set of parallel smooth disc surfaces, which impart rotational motion to the discs fastened to a common shaft for power generation. The power generated depends upon the speed of the fluid leaving the nozzle inlet. Physically, due to the laminar boundary layer phenomenon at the smooth disc surface, the high-speed fluid layers away from the plate, moving against the low-speed fluid layers nearer to the plate, develop a tangential drag through the viscous shear forces. This compels the nearer layers to be dragged along with the faster layers, causing the disc to spin. SolidWorks design software, together with fluid mechanics and machine element design theory, was used to compute the mechanical design specifications of turbine parts such as the 48 mm diameter discs, common shaft, central exhaust, plenum chamber, swappable nozzle inlets, etc. ANSYS CFX 2018 was used for the numerical simulation of the physical phenomena encountered in the operation of the turbine. When the numerical simulation and physical experimental results were compared, there was good agreement between them, both quantitatively and qualitatively. The sources of input and the size of the discs may affect the power generated and the turbine efficiency, respectively. The results may change if there is a change in the fluid flowing between the discs. Studies of inlet fluid pressure versus turbine efficiency and of the number of discs versus turbine power, based on both sets of results, were carried out to develop the relationships between the inlet and outlet parameters of the turbine. The present research work obtained a turbine efficiency in the range of 7-10%, and for this range the electrical power output generated was 50-60 W. Keywords: tesla turbine, cohesion type bladeless turbine, boundary layer theory, tangential fluid flow, viscous and adhesive forces, plenum chamber, pico hydro systems
Procedia PDF Downloads 87
300 Towards Creative Movie Title Generation Using Deep Neural Models
Authors: Simon Espigolé, Igor Shalyminov, Helen Hastie
Abstract:
Deep machine learning techniques including deep neural networks (DNN) have been used to model language and dialogue for conversational agents to perform tasks, such as giving technical support and also for general chit-chat. They have been shown to be capable of generating long, diverse and coherent sentences in end-to-end dialogue systems and natural language generation. However, these systems tend to imitate the training data and will only generate the concepts and language within the scope of what they have been trained on. This work explores how deep neural networks can be used in a task that would normally require human creativity, whereby the human would read the movie description and/or watch the movie and come up with a compelling, interesting movie title. This task differs from simple summarization in that the movie title may not necessarily be derivable from the content or semantics of the movie description. Here, we train a type of DNN called a sequence-to-sequence model (seq2seq) that takes as input a short textual movie description and some information on e.g. genre of the movie. It then learns to output a movie title. The idea is that the DNN will learn certain techniques and approaches that the human movie titler may deploy that may not be immediately obvious to the human-eye. To give an example of a generated movie title, for the movie synopsis: ‘A hitman concludes his legacy with one more job, only to discover he may be the one getting hit.’; the original, true title is ‘The Driver’ and the one generated by the model is ‘The Masquerade’. A human evaluation was conducted where the DNN output was compared to the true human-generated title, as well as a number of baselines, on three 5-point Likert scales: ‘creativity’, ‘naturalness’ and ‘suitability’. Subjects were also asked which of the two systems they preferred. The scores of the DNN model were comparable to the scores of the human-generated movie title, with means m=3.11, m=3.12, respectively. There is room for improvement in these models as they were rated significantly less ‘natural’ and ‘suitable’ when compared to the human title. In addition, the human-generated title was preferred overall 58% of the time when pitted against the DNN model. These results, however, are encouraging given the comparison with a highly-considered, well-crafted human-generated movie title. Movie titles go through a rigorous process of assessment by experts and focus groups, who have watched the movie. This process is in place due to the large amount of money at stake and the importance of creating an effective title that captures the audiences’ attention. Our work shows progress towards automating this process, which in turn may lead to a better understanding of creativity itself. Keywords: creativity, deep machine learning, natural language generation, movies
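A minimal sketch of the sequence-to-sequence idea described above: a GRU encoder reads synopsis token ids and a GRU decoder greedily emits title token ids. The vocabulary, layer sizes and greedy decoding are assumptions for illustration; the paper's actual architecture, training corpus and genre inputs are not reproduced (requires PyTorch).

```python
import torch
import torch.nn as nn

# Toy seq2seq skeleton: encode a movie synopsis (token ids) and decode a short title.
VOCAB, EMB, HID, BOS = 5000, 64, 128, 1   # hypothetical vocabulary/sizes; BOS = begin-of-title id

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, synopsis_ids, max_title_len=8):
        # Encode the synopsis into a single hidden state.
        _, h = self.encoder(self.embed(synopsis_ids))
        # Greedily decode a title one token at a time.
        token = torch.full((synopsis_ids.size(0), 1), BOS, dtype=torch.long)
        title = []
        for _ in range(max_title_len):
            dec_out, h = self.decoder(self.embed(token), h)
            token = self.out(dec_out[:, -1]).argmax(dim=-1, keepdim=True)
            title.append(token)
        return torch.cat(title, dim=1)

model = Seq2Seq()
dummy_synopsis = torch.randint(2, VOCAB, (1, 30))   # 30 synopsis token ids
print(model(dummy_synopsis))                         # 8 generated title token ids (untrained)
```

Training would add teacher forcing and a cross-entropy loss over the reference titles; the skeleton only shows the encode-then-decode structure.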
Procedia PDF Downloads 326
299 Modeling and Optimizing of Sinker Electric Discharge Machine Process Parameters on AISI 4140 Alloy Steel by Central Composite Rotatable Design Method
Authors: J. Satya Eswari, J. Sekhar Babub, Meena Murmu, Govardhan Bhat
Abstract:
Electrical Discharge Machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals between an electrode tool and the part to be machined, immersed in a dielectric fluid. In this paper, a study is performed on the influence of the factors of peak current, pulse-on time, interval time and power supply voltage. The output responses measured were the material removal rate (MRR) and surface roughness. Finally, the parameters were optimized for maximum MRR with the desired surface roughness. RSM involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions. RSM is not free from problems when it is applied to multi-factor and multi-response situations. The design of experiments (DOE) technique is used to select the optimum machining conditions for machining AISI 4140 using EDM. The purpose of this paper is to determine the optimal factors of the electro-discharge machining (EDM) process and to investigate the feasibility of design-of-experiments techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of optimized settings of key machining factors, such as pulse-on time, gap voltage, flushing pressure, input current and duty cycle, on the material removal and surface roughness is carried out using a central composite design. The objective is to maximize the material removal rate (MRR). The central composite design data are used to develop second-order polynomial models with interaction terms. The insignificant coefficients are eliminated from these models by using the Student's t-test and the F-test for goodness of fit. The CCD is first used to determine the optimal factors of the electro-discharge machining (EDM) process for maximizing the MRR. The responses are further treated through an objective function to establish the same set of key machining factors that satisfy the optimization problem of the electro-discharge machining (EDM) process. The results demonstrate the better performance of CCD-data-based RSM for optimizing the electro-discharge machining (EDM) process. Keywords: electric discharge machining (EDM), modeling, optimization, CCRD
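A minimal sketch of the response-surface step described above: a full second-order polynomial model (quadratic and interaction terms) fitted to synthetic central-composite-design runs in coded factors, then searched for the settings that maximize predicted MRR. The factor ranges, response surface and number of runs are assumptions, not the paper's experimental data (requires scikit-learn and NumPy).

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a central composite design: four coded factors
# (peak current, pulse-on time, interval time, supply voltage).
rng = np.random.default_rng(4)
X = rng.uniform(-1.682, 1.682, size=(31, 4))          # coded factor levels
true_mrr = (5 + 2.0*X[:, 0] + 1.2*X[:, 1] - 0.8*X[:, 2]
            + 0.5*X[:, 0]*X[:, 1] - 0.6*X[:, 0]**2)
mrr = true_mrr + rng.normal(0, 0.2, size=31)          # "measured" MRR with noise

# Full second-order (quadratic + interaction) response-surface model.
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                    LinearRegression()).fit(X, mrr)

# Search a coarse grid of factor settings for the maximum predicted MRR.
grid = np.array(np.meshgrid(*[np.linspace(-1.682, 1.682, 9)] * 4)).reshape(4, -1).T
best = grid[np.argmax(rsm.predict(grid))]
print("coded settings maximizing predicted MRR:", np.round(best, 2))
```

In the paper, insignificant polynomial coefficients are further screened with t- and F-tests; the sketch keeps all terms for brevity.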
Procedia PDF Downloads 341
298 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process
Authors: Johannes Gantner, Michael Held, Matthias Fischer
Abstract:
The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany’s total energy demand, additional insulation is key for energy-efficient refurbished buildings. Nevertheless, beyond the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods of Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form the standardized basis for answering these doubts and are becoming more and more important for material producers due to efforts such as the Product Environmental Footprint (PEF) or Environmental Product Declarations (EPD). Due to the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially for the support of decision and policy makers. LCA and LCC results are based on respective models which depend on technical parameters like efficiencies, material and energy demand, product output, etc. Nevertheless, the influence of parameter uncertainties on lifecycle results is usually not considered or only studied superficially. However, the effect of parameter uncertainties cannot be neglected. Based on the example of an exterior wall, the overall lifecycle results vary by a factor of more than three. As a result, the simple best-case/worst-case analyses used in practice are not sufficient. These analyses allow a first rough view of the results but do not take effects such as error propagation into account. As a consequence, LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of the LCA and LCC results and to provide better decision support. Within this study, the environmental and economic impacts of an exterior wall system over its whole lifecycle are illustrated, and the effect of different uncertainty analyses on the interpretation in terms of resilience and robustness is shown. Here, the approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods in order to allow for a deeper understanding and interpretation. All in all, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods. Only in this way can misleading interpretations be avoided and the results used for resilient and robust decisions. Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation
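A minimal sketch of the Monte Carlo approach discussed above, assuming a toy lifecycle model of an insulated exterior wall: uncertain inputs are drawn from assumed distributions and propagated to a net global-warming result. All distributions and parameter values are invented for illustration and are not the study's data.

```python
# Hedged sketch: Monte Carlo propagation of input uncertainty through a toy
# lifecycle model of an insulated wall. Parameter distributions and values are
# purely illustrative assumptions, not the study's data.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Uncertain inputs (assumed lognormal / normal spreads):
insulation_kg_per_m2 = rng.lognormal(mean=np.log(8.0), sigma=0.2, size=N)
gwp_per_kg           = rng.lognormal(mean=np.log(3.5), sigma=0.3, size=N)   # kg CO2e/kg
heating_saved_kwh    = rng.normal(loc=60.0, scale=10.0, size=N)             # kWh/m2/a
grid_factor          = rng.lognormal(mean=np.log(0.4), sigma=0.25, size=N)  # kg CO2e/kWh
lifetime_years       = rng.integers(30, 51, size=N)

# Net lifecycle GWP per m2: production burden minus avoided heating emissions.
net_gwp = insulation_kg_per_m2 * gwp_per_kg - heating_saved_kwh * grid_factor * lifetime_years

p5, p50, p95 = np.percentile(net_gwp, [5, 50, 95])
print(f"net GWP per m2: median {p50:.0f}, 90% interval [{p5:.0f}, {p95:.0f}] kg CO2e")
print("probability insulation is a net benefit:", (net_gwp < 0).mean())
```

The same sampled inputs can be reused for the LCC side, so that correlations between environmental and cost results are preserved across the simulation.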
Procedia PDF Downloads 286297 The Outcome of Using Machine Learning in Medical Imaging
Authors: Adel Edwar Waheeb Louka
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to their low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has expressed a possibility that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from the ones used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. These models are evaluated using the external dataset for validation. The models’ accuracy, precision, recall, f1-score, IOU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used. Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
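A simplified sketch of the transfer-learning classifier described above, using a frozen DenseNet201 backbone and a small classification head for the three classes. The input size, head layout and training settings are assumptions; the paper's autoencoder stage and the U-Net segmentation model are not reproduced here.

```python
# Simplified sketch of a transfer-learning classifier in the spirit of the abstract
# (DenseNet201 backbone, three classes: COVID-19 / pneumonia / normal).
# Hyperparameters and head design are assumptions, not the authors' settings.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet201

base = DenseNet201(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # transfer learning: freeze the backbone

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.3)(x)
out = layers.Dense(3, activation="softmax")(x)   # COVID-19, pneumonia, normal

model = Model(base.input, out)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
# model.fit(train_ds, validation_data=val_ds, epochs=20)   # cropped X-ray batches
```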
Procedia PDF Downloads 73296 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions
Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams
Abstract:
The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture, not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on one hand, to develop and train machine learning algorithms to produce architectural information of small pavilions and, on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. The procedure, once these algorithms are trained, is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; then, using it as source material, an isometric view is created from it; and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input. The final intention of the research is to produce isometric views out of historical information, such as the pavilions from Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process. This research also challenges the idea that algorithmic design is associated only with efficiency or fitness, embracing instead the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first, synthesizing images based on a given dataset, and then generating new architectural information from historical references. We find that the ability to creatively understand and manipulate historic (and synthetic) information will be a key feature in future innovative design processes. Finally, the main question that we propose is whether an AI could be used not just to create an original and innovative group of simple buildings but also to explore the possibility of fostering a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made or synthetic. Keywords: architecture, central pavilions, classicism, machine learning
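The sketch below conveys, purely as an assumption-laden illustration, how the chained generation pipeline described above could be wired up at inference time with three separately trained image-to-image generators (line profile to front view, front view to isometric, isometric to top view). The file names, image size and single-channel format are invented; the paper does not specify its implementation.

```python
# Conceptual sketch (assumptions, not the authors' code): chaining three trained
# image-to-image generators, in the spirit of pix2pix-style GANs, to reproduce the
# pipeline described above: line profile -> front view -> isometric -> top view.
import tensorflow as tf

# Hypothetical saved generator files; the real models and formats are not specified.
profile_to_front = tf.keras.models.load_model("gen_profile_to_front.h5")
front_to_iso     = tf.keras.models.load_model("gen_front_to_iso.h5")
iso_to_top       = tf.keras.models.load_model("gen_iso_to_top.h5")

def synthesize_pavilion(profile_img):
    """profile_img: (1, 256, 256, 1) tensor scaled to [-1, 1] (assumed format)."""
    front = profile_to_front(profile_img, training=False)
    iso   = front_to_iso(front, training=False)
    top   = iso_to_top(iso, training=False)
    return front, iso, top

# Example: generate the three views from a drawn profile of a Serlio-like pavilion.
profile = tf.random.uniform((1, 256, 256, 1), minval=-1.0, maxval=1.0)  # placeholder input
front_view, iso_view, top_view = synthesize_pavilion(profile)
```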
Procedia PDF Downloads 140295 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass
Authors: Ricardo Torcato, Helder Morais
Abstract:
The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained repeatably, and the finishing operations use intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance, to crystal processing. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation is focused on the mechanical characterization and analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness based on the cracks that appear in the indentation. The impulse excitation test estimates the Young’s modulus, shear modulus and Poisson’s ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process. The tests were designed based on the Taguchi method to correlate the input parameters (feed rate, tool rotation speed and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process (the best roughness achievable with cutting forces that do not compromise the material structure or the tool life) using ANOVA. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding. Keywords: CNC machining, crystal glass, cutting forces, hardness
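As an illustration of the Taguchi/ANOVA analysis mentioned above, the sketch below tests the significance of feed rate, spindle speed and depth of cut on surface roughness and picks the level combination with the lowest mean Ra. The data file, column names and the "smaller is better" criterion are assumptions, not the study's records.

```python
# Illustrative sketch: ANOVA on a Taguchi-style grinding experiment, relating
# feed rate, spindle speed and depth of cut to surface roughness (Ra). The data
# file, factor levels and column names are hypothetical, not the study's data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("grinding_runs.csv")   # assumed columns: feed, speed, doc, Ra

# Treat each factor as categorical (its Taguchi levels) and test its significance.
model = smf.ols("Ra ~ C(feed) + C(speed) + C(doc)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F-test per factor

# Pick the level combination with the lowest mean roughness ("smaller is better").
best = df.groupby(["feed", "speed", "doc"])["Ra"].mean().idxmin()
print("best levels (feed, speed, doc):", best)
```

Running the same analysis separately on the conventional and ultrasonic-assisted data sets would allow the two processes to be compared factor by factor, as the abstract describes.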
Procedia PDF Downloads 153294 Control for Fluid Flow Behaviours of Viscous Fluids and Heat Transfer in Mini-Channel: A Case Study Using Numerical Simulation Method
Authors: Emmanuel Ophel Gilbert, Williams Speret
Abstract:
The control of the flow behaviour of viscous fluids and of heat transfer within a heated mini-channel is considered. The heat transfer and flow characteristics of different viscous liquids, such as engine oil, automatic transmission fluid, 50% ethylene glycol, and deionized water, were numerically analyzed. Mathematical tools such as Fourier series and Laplace and Z-transforms were employed to ascertain the wave-like behaviour of each of these viscous fluids. The steady, laminar flow and heat transfer equations are solved with the aid of a numerical simulation technique. Further, this numerical simulation technique is validated by comparing the predicted local thermal resistances with the available practical values. The roughness of the mini-channel, which is one of the physical limitations, was also predicted in this study; it affects the friction factor. When an additive such as tetracycline was introduced into the fluid, the heat input was lowered, and this caused a proportional effect on the minor and major frictional losses, mostly at very small Reynolds numbers of circa 60-80. At these lower Reynolds numbers, the viscosity decreases and the frictional losses become small as a result of the temperature of these viscous liquids being increased. It is inferred that the three equations and models identified, which support the numerical simulation via interpolation and integration of the variables extended to the walls of the mini-channel, yield a high degree of reliability for engineering and technology calculations for turbulence-impacting jets in the near future. In the search for a governing equation that could support this control of the fluid flow, the Navier-Stokes equations were found to be tangential to this finding. However, other physical factors related to these Navier-Stokes equations need to be checked to avoid uncertain turbulence of the fluid flow. This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme, via a numerical simulation method that takes certain terms in the full Navier-Stokes equations into account. However, this resulted in certain assumptions being dropped from the approximation. Concrete questions raised in the main body of the work are examined further in the appendices. Keywords: frictional losses, heat transfer, laminar flow, mini-channel, numerical simulation, Reynolds number, turbulence, viscous fluids
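To make the role of the Reynolds number and friction factor concrete, the short sketch below estimates the laminar Darcy friction factor and pressure drop for the fluids named above in a small channel. The fluid properties, channel dimensions and velocity are rough textbook values, and f = 64/Re assumes a circular laminar duct; none of these figures come from the paper.

```python
# Hedged sketch: Reynolds number and laminar (Darcy) friction-factor estimate for
# the listed fluids in a mini-channel. Properties and dimensions are rough
# textbook values, not the paper's inputs.
fluids = {                      # density (kg/m3), dynamic viscosity (Pa.s) near 25 C
    "deionized water":              (997.0, 0.00089),
    "50% ethylene glycol":          (1070.0, 0.0038),
    "automatic transmission fluid": (850.0, 0.045),
    "engine oil":                   (880.0, 0.20),
}

D_h = 1.0e-3        # hydraulic diameter, m (assumed 1 mm mini-channel)
L   = 0.05          # channel length, m
u   = 0.5           # mean velocity, m/s

for name, (rho, mu) in fluids.items():
    Re = rho * u * D_h / mu                     # Reynolds number
    f  = 64.0 / Re                              # laminar Darcy friction factor
    dp = f * (L / D_h) * 0.5 * rho * u**2       # Darcy-Weisbach pressure drop, Pa
    print(f"{name:30s} Re = {Re:8.1f}  f = {f:8.3f}  dp = {dp/1000:8.2f} kPa")
```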
Procedia PDF Downloads 176293 Economic Impact of Drought on Agricultural Society: Evidence Based on a Village Study in Maharashtra, India
Authors: Harshan Tee Pee
Abstract:
Climate elements include surface temperatures, rainfall patterns, humidity, type and amount of cloudiness, air pressure, and wind speed and direction. A change in one element can have an impact on the regional climate. The scientific predictions indicate that global climate change will increase the number of extreme events, leading to more frequent natural hazards. Global warming is likely to intensify the risk of drought in certain parts while leading to increased rainfall in others. Drought is a slow-advancing disaster and a creeping phenomenon which accumulates slowly over a long period of time. Droughts are naturally linked with aridity. But droughts occur over most parts of the world (both wet and humid regions) and create severe impacts on agriculture, basic household welfare and ecosystems. Drought conditions occur at least every three years in India. India is one of the most vulnerable drought-prone countries in the world. The economic impacts resulting from extreme environmental events and disasters are huge as a result of the disruption of many economic activities. The focus of this paper is to develop a comprehensive understanding of the distributional impacts of disaster, especially the impact of drought on agricultural production and income, through a panel study (drought year and one year after the drought) in Raikhel village, Maharashtra, India. The major findings of the study indicate that the cultivated area as well as the number of cultivating households fell after the drought, indicating a shift in livelihood: households moved from agriculture to non-agriculture. The decline in the gross cropped area and in the production of various crops depended on the negative income from these crops in the previous agricultural season. All the landholding categories of households except landlords had negative income in the drought year, and the income disparities between households were also higher in that year. In the drought year, the cost of cultivation was higher for all the landholding categories due to the increased irrigation and input costs. In the drought year, agricultural products (50 per cent of total output) were used for household consumption rather than sold in the market. It is evident from the study that livelihoods based on natural resources became less attractive due to the risk involved, and people were moving to lower-risk livelihoods for their sustenance. Keywords: climate change, drought, agriculture economics, disaster impact
Procedia PDF Downloads 118292 A User-Directed Approach to Optimization via Metaprogramming
Authors: Eashan Hatti
Abstract:
In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance: high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer, and in turn program performance, falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This allows the optimization workload to be taken off the compiler developers’ hands and given to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant. Keywords: optimization, metaprogramming, logic programming, abstraction
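The toy Python sketch below conveys only the flavor of the idea that optimizations can be supplied by library authors as rewrite rules over program terms. Peridot itself expresses such rules declaratively in its logic-programming meta level with unification and higher-order abstract syntax, which this imperative sketch does not capture; the term shapes and rules are invented for illustration.

```python
# Flavor-only sketch (Python, not Peridot): optimizations written as rewrite rules
# over program terms, applied by structural pattern matching. Requires Python 3.10+.
from dataclasses import dataclass

@dataclass(frozen=True)
class Lit:
    value: int

@dataclass(frozen=True)
class Add:
    left: object
    right: object

@dataclass(frozen=True)
class Mul:
    left: object
    right: object

def simplify(term):
    """Bottom-up rewriting: each case is one 'optimization' a library could add."""
    match term:
        case Mul(l, r):
            l, r = simplify(l), simplify(r)
            match (l, r):
                case (Lit(1), x) | (x, Lit(1)):  return x          # x * 1  ->  x
                case (Lit(0), _) | (_, Lit(0)):  return Lit(0)     # x * 0  ->  0
                case _:                          return Mul(l, r)
        case Add(l, r):
            l, r = simplify(l), simplify(r)
            match (l, r):
                case (Lit(0), x) | (x, Lit(0)):  return x          # x + 0  ->  x
                case _:                          return Add(l, r)
        case _:
            return term

print(simplify(Add(Mul(Lit(1), Lit(7)), Lit(0))))   # -> Lit(value=7)
```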
Procedia PDF Downloads 87291 Using Signature Assignments and Rubrics in Assessing Institutional Learning Outcomes and Student Learning
Authors: Leigh Ann Wilson, Melanie Borrego
Abstract:
The purpose of institutional learning outcomes (ILOs) is to assess what students across the university know and what they do not. The issue is gathering this information in a systematic and usable way. This presentation will explain how one institution has engineered this process for both student success and maximum faculty curriculum and course design input. At Brandman University, there are three levels of learning outcomes: course, program, and institutional. Institutional Learning Outcomes (ILOs) are mapped to specific courses. Faculty course developers write the signature assignments (SAs) in alignment with the Institutional Learning Outcomes for each course. These SAs use a specific rubric that is applied consistently by every section and every instructor. Each year, the 12-member General Education Team (GET), as a part of their work, conducts the calibration and assessment of the university-wide SAs and the related rubrics for one or two of the five ILOs. GET members, who are senior faculty and administrators representing each of the university's schools, lead the calibration meetings. Specifically, calibration is a process designed to ensure the accuracy and reliability of evaluating signature assignments by working with peer faculty to interpret rubrics and compare scoring. These calibration meetings include the full-time and adjunct faculty members who teach the course to ensure consensus on the application of the rubric. Each calibration session is chaired by a GET representative as well as the course custodian/contact where the ILO signature assignment resides. The overall calibration process GET follows includes multiple steps, such as: contacting and inviting relevant faculty members to participate; organizing and hosting calibration sessions; and reviewing and discussing at least 10 samples of student work from class sections during the previous academic year, for each applicable signature assignment. The commitment for calibration teams consists of attending two virtual meetings lasting up to three hours each. The first meeting focuses on interpreting the rubric, and the second meeting involves comparing scores for sample work and sharing feedback about the rubric and assignment. Participants are expected to follow all directions provided, participate actively, and respond to scheduling requests and other emails within 72 hours. The virtual meetings are recorded for future institutional use. Adjunct faculty are paid a small stipend after participating in both calibration meetings. Full-time faculty can use this work on their annual faculty report for "internal service" credit. Keywords: assessment, assurance of learning, course design, institutional learning outcomes, rubrics, signature assignments
Procedia PDF Downloads 280290 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances
Authors: P. Mounnarath, U. Schmitz, Ch. Zhang
Abstract:
Fragility analysis has been an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints is largely inconsistent across the various bridge design codes, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and four different gap values. Nonlinear time history analysis is performed. The artificial ground motion sets, which have peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g with an increment of 0.05 g, are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that, in the component fragility analysis, the reference bridge model exhibits a more severe vulnerability than the other, more sophisticated bridge models for all damage states. In the system fragility analysis, the reference curves illustrate a smaller damage probability in the earlier PGA ranges for the first three damage states; they then show a higher fragility than the other curves at the larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: the bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect. Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis
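The sketch below shows one common way to turn time-history results like those described above into a fragility curve: counting, at each PGA level, how many analyses exceed a damage state and fitting a lognormal curve by maximum likelihood. The exceedance counts are invented, and the lognormal form is a standard modelling assumption rather than something prescribed by the study.

```python
# Hedged sketch: fitting a lognormal fragility curve P(damage | PGA) to the
# fraction of time-history analyses exceeding a damage state at each PGA level.
# The counts below are fabricated placeholders; the lognormal form is a common
# assumption and is not taken from the study.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

pga = np.arange(0.10, 1.01, 0.05)                 # g, matching the input motion sets
n_records = 20                                    # assumed motions per PGA level
n_exceed = np.clip((pga * 25 - 3).astype(int), 0, n_records)   # placeholder counts

def neg_log_likelihood(params):
    theta, beta = params                          # median (g) and log standard deviation
    p = norm.cdf(np.log(pga / theta) / beta)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(n_exceed * np.log(p) + (n_records - n_exceed) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[0.5, 0.4], bounds=[(0.05, 2.0), (0.05, 1.5)])
theta, beta = fit.x
print(f"fragility median = {theta:.2f} g, dispersion beta = {beta:.2f}")
print("P(damage | PGA = 0.3 g) =", norm.cdf(np.log(0.3 / theta) / beta))
```

Repeating the fit for each component and damage state yields the component curves, which can then be combined into the system fragility curves described in the abstract.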
Procedia PDF Downloads 435